Talent Navigator – Career Advice with GenAI
The Internal Talent Navigator is a GenAI-powered career guidance tool that helps developers explore their next move by analyzing real-world career journeys. Using embeddings and similarity search, it finds peer profiles that match a user’s background and compares them with profiles from a target role. An LLM then synthesizes this information to offer personalized, grounded advice—turning abstract career exploration into concrete, data-backed suggestions.
🎥 Demo: Guided Career Advice from Real-World Paths
In this demo, I walk through how the Internal Talent Navigator works in action. I start by entering key profile details—education level, coding experience, job role, country, tools used, and more. With one click, the app uses vector similarity to surface the five most similar peer profiles and five that align closely with the user’s desired role. The GenAI assistant then steps in, synthesizing these profiles to infer a likely next career step and generate tailored, specific advice. The suggestions build on the user’s experience and reflect patterns found in the journeys of others—so the feedback is both realistic and relevant, not just AI guesswork.
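Under the hood, the matching step boils down to serializing each structured profile into a short text string, embedding it, and ranking by cosine similarity. The sketch below shows roughly how that can look; the embedding model, column names, and helper functions are illustrative assumptions, not the app's exact implementation.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Assumed embedding model; the model actually used in the app may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

def profile_to_text(profile: dict) -> str:
    # Serialize the structured survey fields into a single string for embedding.
    return "; ".join(f"{k}: {v}" for k, v in profile.items() if v)

user_profile = {
    "EdLevel": "Bachelor's degree",
    "YearsCodePro": "4",
    "DevType": "Developer, back-end",
    "Country": "Germany",
    "LanguageHaveWorkedWith": "Python; SQL; Go",
}

# Normalized embeddings, so the dot product below equals cosine similarity.
user_vec = model.encode(profile_to_text(user_profile), normalize_embeddings=True)

def top_k(query_vec: np.ndarray, profile_vecs: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k profiles most similar to the query.

    profile_vecs: pre-computed, normalized embeddings for all survey respondents.
    """
    scores = profile_vecs @ query_vec
    return np.argsort(scores)[::-1][:k]
```

Because the vectors are normalized up front, the top-5 lookup is a single matrix-vector product followed by a sort, which keeps the "one click" interaction fast.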
🗂️ About the Dataset
This project uses the 2024 Stack Overflow Developer Survey, one of the most comprehensive datasets capturing global developer demographics, technologies, job roles, work preferences, and compensation insights. The dataset includes responses from over 65,000 developers across 185+ countries and covers a wide range of variables—from education level and coding experience to AI tool usage and job satisfaction. For this app, I focused on a curated subset of structured responses to explore and generate career advice using GenAI.
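For orientation, curating that subset is a small pandas step over the public survey export. The column list below is an illustrative guess at the kind of subset used, not the app's final schema.

```python
import pandas as pd

# Standard filename of the public survey export; the column subset is illustrative.
df = pd.read_csv("survey_results_public.csv")

columns = [
    "EdLevel", "YearsCodePro", "DevType", "Country",
    "LanguageHaveWorkedWith", "ToolsTechHaveWorkedWith",
    "AISelect", "JobSat",
]
profiles = (
    df[columns]
    .dropna(subset=["DevType", "Country"])  # keep rows usable for role/country matching
    .reset_index(drop=True)
)
```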
💡 What I Learned Building the Internal Talent Navigator
This project turned out to be a deep dive into how GenAI can be practically used to support real-world workforce mobility decisions. While I’ve built GenAI apps before, this one pushed me into fresh territory and taught me some valuable lessons:
Prompt Engineering for Reasoning Over Structured Data
I learned how tricky it can be to get an LLM to reason meaningfully from structured, real-world data. Since my underlying dataset (based on the Stack Overflow Developer Survey) only included a handful of relevant features—like experience, languages used, tools, and job roles—I had to figure out how to:
- Craft prompts that guide the LLM to infer patterns from peers and target profiles.
- Prevent hallucinations (like the infamous “goal is becoming Australia” 😅) by grounding the prompt tightly to the available columns.
It was not just about inserting the data—it was about structuring the story so the LLM made realistic, personalized suggestions. The sketch below shows roughly the prompt structure I converged on.
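This is a simplified reconstruction rather than the exact prompt I shipped, but the key moves are the same: name the allowed fields, forbid inventing anything outside them, and separate the peer evidence from the target-role evidence.

```python
def build_prompt(user_profile: str, peer_profiles: list[str], target_profiles: list[str]) -> str:
    # Each profile is already serialized to a short "field: value; ..." string.
    peers = "\n".join(peer_profiles)
    targets = "\n".join(target_profiles)
    return f"""You are a career advisor for software developers.

Use ONLY the fields provided below (education, experience, role, country,
languages, tools). Do not invent goals, employers, or skills that are not listed.

## User profile
{user_profile}

## Five peers with a similar background
{peers}

## Five developers already in the user's target role
{targets}

Task: infer the most likely next career step for this user and give three
specific, actionable suggestions grounded in the differences between the
peer profiles and the target-role profiles."""
```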
Designing Dual Embedding Spaces for Profile Matching
Instead of matching a user to just one set of profiles, I experimented with comparing:
- Peer profiles (similar background)
- Target role profiles (aspirational roles)
This dual retrieval approach made the advice more nuanced and grounded. I learned how to:
- Tune the embedding generation (and clean junk like `<span>` tags in the input data to improve accuracy)
- Split similarity logic across two separate intents
- Debug why "similar profiles" sometimes weren't geographically or professionally aligned, and refine accordingly.
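Here is a condensed sketch of the two intents plus the input cleanup, with illustrative field names; the `DevType` filter and the regex stripping stand in for the real logic rather than reproducing it.

```python
import re
import numpy as np
import pandas as pd

def clean_text(text: str) -> str:
    # Strip leftover HTML (e.g. <span> tags) before generating embeddings.
    return re.sub(r"<[^>]+>", " ", str(text)).strip()

def dual_retrieve(user_vec: np.ndarray,
                  profiles: pd.DataFrame,
                  vectors: np.ndarray,
                  target_role: str,
                  k: int = 5):
    # vectors are pre-normalized, so the dot product equals cosine similarity.
    scores = vectors @ user_vec

    # Intent 1: peers, i.e. the most similar profiles regardless of current role.
    peer_idx = np.argsort(scores)[::-1][:k]

    # Intent 2: target, i.e. the most similar profiles among respondents
    # already working in the desired role.
    mask = profiles["DevType"].fillna("").str.contains(target_role, case=False, regex=False)
    target_pool = np.flatnonzero(mask.to_numpy())
    target_idx = target_pool[np.argsort(scores[target_pool])[::-1][:k]]

    return profiles.iloc[peer_idx], profiles.iloc[target_idx]
```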
Iterative Debugging and Real-Time UX Feedback
Performance and UX matter more than you think. I hit issues with:
- Colab GPU quotas (hello, peak time burnout 😵)
- Storage limits on Hugging Face Spaces (turns out deletions do not always free up space)
- Needing a loading spinner to signal background LLM processing—small change, big impact! (See the snippet below.)
These taught me the importance of a tight feedback loop between frontend, backend, and real users.
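On the spinner point: the write-up does not name the front-end framework, so the snippet below assumes Streamlit purely for illustration, and `generate_advice` is a hypothetical stand-in for the retrieval-plus-LLM call.

```python
import time
import streamlit as st

def generate_advice(profile: dict) -> str:
    # Hypothetical stand-in for the embedding search + LLM call.
    time.sleep(2)  # placeholder for the real latency
    return "Based on similar journeys, a realistic next step is ..."

profile = {"DevType": "Developer, back-end", "YearsCodePro": "4"}

if st.button("Get career advice"):
    with st.spinner("Retrieving similar profiles and generating advice..."):
        advice = generate_advice(profile)
    st.markdown(advice)
```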
Making the Model Not Make Stuff Up
One of the most important things I learned was how easily LLMs can drift into generic or incorrect advice if left unchecked. I:
- Pruned the prompt aggressively to keep it grounded in the 20-column input schema
- Explicitly told the model not to invent goals
- Learned to identify whether the LLM was truly using the structured info I fed it, or just freestyling
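One cheap way to check that last point is a post-generation scan for technologies the advice mentions that never appear in the retrieved profiles. The helper below is an illustrative sketch with deliberately simple substring matching, not the app's actual guardrail.

```python
def ungrounded_mentions(advice: str,
                        grounded_profiles: list[dict],
                        vocabulary: set[str]) -> set[str]:
    """Return vocabulary terms the advice mentions that never appear in the grounded profiles."""
    grounded_text = " ".join(str(v) for p in grounded_profiles for v in p.values()).lower()
    advice_lower = advice.lower()

    mentioned = {term for term in vocabulary if term.lower() in advice_lower}
    return {term for term in mentioned if term.lower() not in grounded_text}

# Example: the advice recommends Rust and Kubernetes, but no retrieved profile lists them.
flags = ungrounded_mentions(
    advice="Consider deepening Rust and Kubernetes skills.",
    grounded_profiles=[{"LanguageHaveWorkedWith": "Python; SQL; Go"}],
    vocabulary={"Rust", "Kubernetes", "Python", "Go", "SQL"},
)
print(flags)  # flags both "Rust" and "Kubernetes"
```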
Building With the End User in Mind
The heart of the project was the career explorer—helping developers understand not just where they are, but where they could go next, based on real-world peer journeys. That shifted my mindset from “model builder” to “experience designer.”
Final Thoughts
This was not just a project—it was a sandbox for everything that makes GenAI exciting and challenging: user alignment, data representation, retrieval, and prompting. It pushed me to blend creativity with constraint, and taught me how to deliver something that feels personal, helpful, and grounded in real-world data.