Data Sciences

Our Data Sciences team is responsible for building, governing, and maximizing the use of Target’s data assets for elevated decision-making across our retail operations. We build certified data sets for analytics and decision modeling and develop data platforms for exploration, visualization and measurement. We create statistical forecasting models, optimize algorithms with machine learning techniques, and validate model performance – with both analytical AI and generative AI – all while delivering algorithmic decisioning at massive scale. We empower our Target team with the insights they need to improve product and system performance and further innovate to improve guest experience.

Recent blogs

  • Make Python DevEx

    March 29, 2024
    By Colin Dean
    How a 47+ year old tool can MAKE Python developer experience easier.
  • Elevating Guest Repurchasing Behavior Using Buy It Again Recommendations

    November 2, 2023
    By Rankyung Park and Amit Pande
    Target’s Data Science team shares an inside look at our Buy It Again model.
  • Solving for Product Availability with AI

    October 24, 2023
    By Brad Thompson and Meredith Jordan
    Read about how Target uses AI to improve product availability in stores.
  • Target AutoComplete: Real Time Item Recommendations at Target

    July 25, 2023
    By Bhavtosh Rath
A look at our Data Science team's patent-pending AI recommendation model.
  • Real-Time Personalization Using Microservices

    May 11, 2023
    By Amit Pande, Pushkar Chennu, and Prathyusha Kanmanth Reddy
How Target's Personalization team uses microservices to improve our guest experience.
  • Developing JupyterLab Extensions

    August 2, 2022
    By Arman Shah
Target’s technologists are encouraged to take advantage of “50 Days of Learning,” a program that enables engineers to spend time exploring new technologies or learning new languages and systems. I wanted to learn more about developing my own extensions and used some of my learning time to dive in.
  • Requirements for Creating a Documentation Workflow Loved by Both Data Scientists and Engineers

    April 6, 2022
    By Colin Dean
    This is an adaptation of a presentation delivered to conferences including Write the Docs Portland 2020, Ohio Linuxfest OpenLibreFree 2020, and FOSDEM 2021. The presentation source is available at GitHub and recordings are available on YouTube. This is a two-part post that will share both the requirements and execution of the documentation workflow we built that is now used by many of our teammates and leaders. Read part two here.
  • Executing a Documentation Workflow

    April 6, 2022
    By Colin Dean
    This post is the second in a two-part series about creating a documentation workflow for data scientists and engineers. Click here to read the first post. This is an adaptation of a presentation delivered to conferences including Write the Docs Portland 2020, Ohio Linuxfest OpenLibreFree 2020, and FOSDEM 2021. The presentation source is available at GitHub and recordings are available on YouTube.
  • Using BERT Model to Generate Real-time Embeddings

    March 23, 2022
    By Pushkar Chennu and Amit Pande
How we chose and implemented an effective model to generate embeddings in real-time. Target has been exploring, leveraging, and releasing open source software for several years now, and we are already seeing a positive impact on how we work together. In early 2021, our recommendations team started to consider real-time natural language input from guests, such as search queries, Instagram posts, and product reviews, because these signals can be useful for personalized product recommendations. We planned to generate representations of those guest inputs using the open source Bidirectional Encoder Representations from Transformers (BERT) model.
  • Spring Boot Service-to-Service Communication

    December 18, 2018
    By Jeffrey Bursik and Pruthvi Dintakurthi
    This post will walk through our implementation of Spring Feign Client, our learnings, and how Spring Feign Client has helped manage our inner-service communication while reducing the amount of development time.
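As a rough sketch of the idea behind the embeddings and recommendations posts above: an encoder such as BERT maps a piece of text to a fixed-length vector, and products can then be ranked by cosine similarity between their vectors and a guest query's vector. The vectors and product names below are made-up stand-ins for real model outputs, not Target's actual data or implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings; real BERT vectors have 768+ dimensions.
product_embeddings = {
    "granola": [0.9, 0.1, 0.2],
    "orange juice": [0.1, 0.8, 0.3],
    "batteries": [0.2, 0.2, 0.9],
}

# Pretend this is the encoder's output for a guest search query.
query_embedding = [0.85, 0.15, 0.25]

# Rank products by similarity to the query and pick the closest one.
best = max(
    product_embeddings,
    key=lambda p: cosine_similarity(query_embedding, product_embeddings[p]),
)
print(best)  # the product whose embedding is closest to the query
```

In a real-time system the expensive step is running the encoder on the guest's text; the similarity ranking itself is cheap, which is why precomputing and caching product embeddings is a common design choice.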