
Psych Tech @ Work

Business & Economics Podcasts

Science 4-Hire is now Psych Tech @ Work! - a podcast about safe innovation at the intersection of psychological science, technology, and the future of work. Psych Tech @ Work promotes safe technological innovation and human/machine partnerships as an essential force in creating equilibrium between psychology and commerce. Maintaining this balance in a time of unprecedented change is essential for ensuring that the future of work is ethical, positive, and prosperous. Creating such a future requires an unprecedented level of interdisciplinary collaboration. With the goal of educating, engaging, and inspiring others through thoughtful and practical discussions with guests from a wide variety of backgrounds and specialties, Psych Tech @ Work provides a smorgasbord of food for thought and practical takeaways about the issues that will make or break the future of work! charleshandler.substack.com

Location:

United States


Language:

English


Episodes

The Truth About AI Based Talent Assessment

2/23/2026
“The rules haven’t changed. The technology has — but the rules haven’t.” — Nathan Mondragon

Episode Overview

In this episode, I’m joined by my old friend (and now co-worker!) Nathan Mondragon, an IO psychologist and long-time leader in creating the future at the intersection of assessment science, hiring technology, and applied AI. Nathan and I have lived through multiple waves of “this will change everything” technology — from early online testing to video interviewing, machine learning, and now generative AI. And the beat goes on! Nathan and I have recently joined forces at ProboTalent, where we are creating defensible AI-based assessment tools.

We talk about where AI has genuinely moved the field forward, where it hasn’t, and why so many of the debates we’re having today are versions of conversations we’ve been having for decades. Along the way, we unpack Nathan’s paradigm-busting work at HireVue, and why the fundamentals of good measurement haven’t changed — even as the tools have.

Topics Discussed & Key Insights

1. The Rules of Good Assessment Haven’t Changed — We Just Keep Forgetting Them
Nathan makes a point that anchors the entire episode: while technology has advanced dramatically, the core rules of good assessment — validity, relevance, interpretability, and fairness — are exactly the same. AI doesn’t get a pass on methodology. If anything, it raises the bar for rigor, because mistakes scale faster.

2. Early Hiring Tech Was Built to Solve Operational Problems, Not Measurement Problems
We talk about the early days of online hiring and assessment, where the primary goal was digitization, not insight. Systems were designed to move paper processes online, not to improve how well we understand people. That legacy still shapes today’s platforms — and explains why so many tools feel efficient but shallow.

3. HireVue Was a Real Paradigm Shift — and It Required Scientific Courage
Nathan reflects on the early days of HireVue and why it was genuinely revolutionary at the time. The breakthrough wasn’t just video — it was the larger shift toward digitizing and scaling structured assessment experiences in a way the field hadn’t seen before. What made this moment interesting from an IO psychology standpoint is that it required a different mindset as a scientist: being willing to engage with a new modality, even when the measurement implications weren’t fully understood yet. Innovation in assessment has always involved tension — between rigor and experimentation, between what’s proven and what’s possible. Nathan shares what it was like to help lead through that transition, and why thoughtful scientists have to be able to sit with uncertainty long enough to shape new approaches responsibly, rather than rejecting them outright.

4. AI Didn’t Create Bad Measurement — It Made It Easier to Scale
A recurring theme: AI doesn’t magically improve weak constructs. If you feed it noisy proxies, you just get faster, more confident noise. We discuss why generative AI and machine learning don’t eliminate the need for careful construct definition — and why “it correlates” is not the same thing as “it measures something useful.”

5. Interactivity Matters More Than Modality
One of the most important takeaways: the future of assessment isn’t about whether something is text, video, or simulation-based — it’s about how interactive and information-rich the experience is. Nathan explains why dynamic interaction reveals far more about decision-making, reasoning, and capability than static prompts ever will.

6. Native AI vs. Embedded AI Is a False Debate
We unpack the difference between “AI-native” products and traditional tools with AI layered on top — and why this distinction often misses the point. What matters isn’t where AI lives in the stack, but whether it’s being used to improve interpretation, not just automate scoring or classification.

7. Skills and Knowledge Are Still Hard to Measure — and AI Has to Be Used...

Duration:00:48:53


Why Recruiting Tech is (Still) Not Helping Candidates and How to Fix It

1/19/2026
“There’s this massive imbalance between the employer side of the recruiting equation where they’ve got all the tech, they’ve got all the weapons… Candidates don’t have anything.” — Doug Berg

In this episode, I’m joined by Doug Berg, head matcher and big kahuna at Match2, a longtime builder and operator in the talent technology/recruitment space and the only guy I know who wears flip-flops to HR Tech. Doug has lived and hacked nearly every iteration of online hiring — from fax machines and early internet job fairs to today’s AI-powered recruiting chaos.

Doug and I have lived parallel lives in some sense. We have both been on the scene since recruitment went online and have continued to wage war against the barriers that are blocking successful hiring. But Doug’s unique experience building recruiting-focused tech helps us take a very well-rounded perspective.

Doug and I trace the psychology of hiring systems, why most recruiting technology still fails both candidates and employers, and how efficiency-driven design has quietly stripped humanity out of the process. We talk about what broke, why AI is making some problems worse before it makes them better, and what a candidate-centered future could actually look like if we stop designing hiring like a transactional funnel and start designing it like a relationship.

Topics Discussed & Key Insights

1. Hiring Has Always Been Psychological — Ignoring That Is Why It Breaks
Doug shares early recruiting stories that reveal a core truth: people don’t make job decisions based solely on skills or titles. They’re driven by values, aspirations, lifestyle preferences, and identity. Yet most hiring systems still treat people as static records instead of dynamic humans. Music to the ears of a psychologist like me!

2. Applicant Tracking Systems Were Built for Control, Not for Candidates
We unpack how applicant tracking systems were designed for compliance and efficiency — not engagement. The result:
* One-way transactions
* Forced applications
* Zero room for curiosity, context, or conversation
Doug explains why this original design choice still haunts modern hiring.

3. AI Isn’t Breaking Hiring — It Is Amplifying the Broken Parts
AI didn’t invent hiring dysfunction — it amplified it. Candidates now apply to dozens of jobs at once using bots. Employers respond with more screening, more filters, more automation. The outcome? More noise. Less signal. Worse experiences on both sides.

4. Real Hiring Happens Through Interaction, Not “Efficiency”
Doug tells stories about simple interventions — like proactive chat on career sites — that led to real hires for impossible-to-fill roles. The lesson is clear: when candidates are allowed to participate instead of comply, hiring actually works.

5. Hiring Will Stay Broken Until Candidates Control Their Side of the System
One of the central ideas in the episode: candidates have never been given real agency. Doug explains the structural imbalance:
* Companies control the systems
* Candidates adapt or disappear
We explore what changes when candidates control their own data, preferences, and relationships — and why that shift matters.

6. The Resume Is a Dead Artifact — Identity Needs to Be Portable
Resumes are outdated snapshots. Doug makes the case for living profiles, portable personalization, and persistent relationships that move with the candidate across employers. AI finally makes this possible — not by enforcing rigid taxonomies, but by interpreting relevance across skills, experience, and context.

7. The Future of Hiring Should Feel Like Reconnection, Not Rejection
We close by zooming out. Doug shares a simple but radical vision: if someone gets laid off on Friday, they shouldn’t start from zero. They should already know:
* Who wants them
* What they’re worth
* Where they fit
Hiring shouldn’t feel like rejection roulette. It should feel like an intelligent market reconnecting human supply and demand.

Final Takeaway
Hiring...

Duration:00:47:43


AI Education, Personalized Learning, and the Future of Work

12/19/2025
TL;DR

AI literacy is becoming a baseline skill. This episode explores how organizations and individuals are actually building AI capability at work, with a focus on:
* Self-directed learning and AI education at scale
* Personalized learning journeys versus one-size-fits-all training
* The shift from basic AI use to agentic workflows
* The role of human strengths — creativity, judgment, and adaptability — in an AI-driven workplace

In this episode, I’m joined by Erica Salm Rench, an AI educator and leader at Sidecar AI. Sidecar is an AI education platform and learning management system (LMS) designed to help organizations educate their employees on AI through self-directed learning. It combines structured courses, role-based learning paths, and hands-on use cases so individuals can build AI capability at their own pace while organizations raise overall AI fluency. Our conversation explores what AI education actually looks like beyond hype — how people are learning it, how organizations are rolling it out, and why understanding AI is quickly becoming a career differentiator rather than a technical specialty.

AI Education Has Shifted from “What Is It?” to “How Do I Use It?”
Erica explains that the conversation around AI in associations has changed dramatically over the last several years. Early on, organizations were hesitant to even talk about AI. Today, the question is no longer what is AI? but how can we use it to advance our mission, improve operations, and better serve our members? That shift brings a new challenge: helping people move from curiosity to competence in a way that feels approachable rather than overwhelming.

Meeting People Where They Are
One of the strongest themes in our discussion is the importance of meeting learners at their current level of comfort and knowledge. AI education isn’t one-size-fits-all. This means combining:
* Foundational AI concepts
* Role-specific applications (marketing, events, operations)
* A growing library of real-world use cases
* Ongoing updates as tools evolve
The goal isn’t to turn everyone into an AI engineer — it’s to help people understand what’s possible and apply AI meaningfully in their day-to-day work.

From Prompting to Agentic Work
We spend time talking about the evolution from simple AI use cases — like writing emails or summarizing content — to agentic AI, where systems take action on a user’s behalf. This shift matters because it fundamentally changes how work gets done. Instead of just assisting with tasks, AI begins to:
* Automate multi-step workflows
* Scale work that previously required human labor
* Act as a force multiplier rather than a one-off tool
We agree that while much of this is still clunky today, the direction is clear: agents are becoming a core part of how work will be organized.

Personalized Learning Is the Future of Education
A major insight from the episode is that personalized learning journeys will define the next phase of education — especially in fast-moving domains like AI. Erica describes how Sidecar uses AI within its learning environment to:
* Act as a learning assistant
* Answer questions in real time
* Reinforce concepts
* Help learners connect theory to application
This mirrors a broader trend: education becoming less about static courses and more about continuous, adaptive support.

The Psychology of Learning AI at Work
We talk openly about fear — fear of job loss, fear of falling behind, fear of not being “technical enough.” Erica makes the case that leaders have a responsibility to educate their teams, not just for organizational performance, but for people’s long-term career resilience. From a psychological perspective, AI education:
* Reduces anxiety by replacing uncertainty with understanding
* Increases confidence and autonomy
* Helps people see AI as a collaborator, not a threat
Spending even 20–30 minutes a day learning AI can quickly change how people see their own future at work.

Human Strengths Still...

Duration:00:44:00


Jobs, Security, and Survival: Is Universal Basic Income in our Future?

11/21/2025
“So much of the labor market is driven by desperation. UBI shifts that. People can actually hold out for what they’re worth or for work that aligns with who they are.” — Conrad Shaw

Conrad is perhaps the most unique guest I have had in the five-year history of this show, and he is on to talk about Universal Basic Income (UBI), a topic that is growing in exposure. For almost a decade Conrad has dedicated his life and career to furthering the cause of UBI. In 2016 he and his wife started a documentary called Bootstraps, which focuses on following families who lived through the experience of a basic income. Since then, he has:
* Fundraised for and operated a nationwide basic income pilot
* Filmed a multi-year docuseries currently in post-production
* Co-founded Commingle, a mutual-aid platform enabling communities to self-fund their own grassroots basic income systems
* Worked extensively on messaging, outreach, and public education around income, stability, and societal transformation

I learned a lot from Conrad, and our conversation debunked my own myths about UBI. So a really important part of this episode is the truth about what Universal Basic Income actually is — and what it is not.

What Universal Basic Income (UBI) Is — And What It Isn’t
UBI is the idea that every person receives a recurring, unconditional, baseline income — a financial floor that ensures no one starts the month at zero. It is not meant to replace work or equalize everybody’s income. Instead, it shifts the starting point so people can make decisions from stability rather than desperation.

What UBI is:
* A stable, universal base-level income for all
* A platform for economic mobility and personal freedom
* A modernized, simplified social safety net
* A tool for reducing the survival-based pressure in the labor market

What UBI is not:
* It does not eliminate jobs
* It does not cap how much people can earn
* It does not remove incentives to work
* It is not a socialist equal-wealth system

UBI reframes the labor market so people compete for work based on interest, alignment, and ability, not raw financial need.

Practical Ways UBI Could Work
Conrad’s work goes beyond speculation. He has spent nearly a decade building practical UBI experiments, including the national pilot documented in Bootstraps (2016) and his current role with the Income To Support All Foundation and Commingle, a new community-driven model. He explains that UBI can be implemented through several pathways — government programs, private pilots, or community-level mutual aid — but none are simple. A government-led UBI requires political will and rethinking how we allocate resources. Philanthropic pilots can demonstrate impact, but they’re temporary. Community models like Commingle allow people to pool and redistribute resources now, without waiting for legislation, but scaling them is challenging. What’s clear is that executing UBI at any level is difficult, requiring trust, infrastructure, and cultural acceptance. Yet the difficulty doesn’t diminish the need. Instead, it underscores why experimentation and new models matter.

Individual Differences: Why UBI Supports People Doing What They’re Meant to Do
One of the deepest connections between Conrad’s work and mine is the concept of individual differences — the idea that every person brings a unique constellation of strengths, traits, interests, and abilities that make them naturally better suited to certain kinds of work. When people are trapped in survival mode, those natural gifts often go unused. They pick jobs they can get, not jobs that reflect who they are. Freedom from this paradigm reshapes careers in ways that benefit both individuals and employers, allowing people to walk away from toxic or exploitative conditions and take jobs they genuinely care about, leading to better performance and engagement. With a secure foundation, people have the psychological and...

Duration:00:56:21


You Can’t Microwave Skills Based Hiring! Here’s the Five Star Recipe!

10/27/2025
“You can’t implement skills-based hiring by flipping a switch. It’s about changing mindsets, systems, and the language your organization uses to describe talent.” — Ashley Walvoord

In this episode of Psych Tech @ Work, my AI co-host Mayda Tokens and I welcome fellow I/O psychologist (and LSU Tiger!) Ashley Walvoord, Senior Vice President of Talent at Verizon. Mayda continues to impress at times while showing a tendency to be pretty boring at others, and she always tells really bad jokes (I think the API to Chat-GPT 5o gets a very different sense of humor than the consumer version).

I reached out to Ashley after seeing her SIOP presentation about Verizon’s skills-based hiring (and organizational transformation) program. She and her fellow presenters, Max McDaniel (Verizon), Christina Norris-Watts (J&J), Ruth Imose (J&J), and Jason Frizel (Walmart), provided amazing insights into their companies’ inspiring skills-based hiring programs.

The hype around skills-based hiring these days makes it seem easy. But talk is cheap, and doing skills-based hiring right takes a total ALL-IN approach, one that is rooted in the commitment to become a true skills-based organization. Ashley has lived this life, and her experience provides an awesome preview of how one of the world’s largest organizations is reimagining hiring and development through skills and AI. We are all lucky to have her on the show! Verizon’s transformation provides a rare look at how enterprise-scale companies operationalize skills-based hiring while navigating the practical realities of change management, technology integration, and workforce readiness.

Summary
This conversation bridges strategy and execution, offering a clear-eyed view of how a Fortune 50 company is aligning people, process, and technology around skills. Ashley shares the lessons learned from Verizon’s commitment to a multi-year, organization-wide transformation, a journey with many whistle-stops along the way, from defining skills frameworks to embedding them in hiring and internal mobility.

Key Themes

1. Building Skills Infrastructure at Scale
Ashley explains how skills-based hiring starts long before implementation — requiring shared language, governance, and validation across the enterprise. Verizon’s approach focuses on sustainability and integration rather than one-off pilots.

2. Human Oversight in an AI-Driven System
AI plays a growing role in matching and mobility, but Ashley underscores that human judgment remains central. The goal isn’t automation for its own sake, but augmentation — using technology to help people make better, more equitable decisions.

3. Culture Change Through Data Transparency
Verizon’s success depends on building trust with employees and leaders by showing the “why” behind skills data and AI insights. Visibility into how skills are used for development and promotion helps drive adoption.

4. Enterprise Challenges and Lessons Learned
Ashley shares the realities of scaling change: aligning functions, managing vendor relationships, and ensuring consistency across geographies. Her advice is practical — start small, demonstrate impact, and scale what works.

5. Future Vision for Skills and AI in Talent
Ashley envisions a future where skills become the connective tissue between learning, mobility, and performance — and where AI acts as a trusted partner in enabling opportunity at every level.

Takeaways
* Enterprise-scale transformation requires governance, not just technology.
* AI can accelerate fairness and insight, but must remain transparent and human-centered.
* Data visibility is the key to cultural adoption — employees must see personal benefit.
* Scaling skills frameworks demands partnership between HR, technology, and business leadership.

The future of work will depend on how we align AI, human judgment, and purpose at scale, and on a commitment to verifying and managing skills at scale. This...

Duration:00:46:08


How to Prepare for the Future of Hiring NOW!— Lessons from Two Decades of HR Tech Research

10/9/2025
Quote: “When this all started (generative AI for the masses), the fear was ‘this is cheating.’ Now we’re flipping the conversation and saying, no — this is actually a skill set you need to develop.” — Madeline Laurano

In this episode I welcome Madeline Laurano, Founder of Aptitude Research and one of the most trusted voices in HR and TA technology. With more than 20 years of research and advisory experience, Madeline’s body of work has tracked the evolution of everything at the intersection of hiring, business, and tech. We have known one another for a long time and are quite simpatico in our thoughts on talent acquisition, assessment, and skills-based hiring. And we prove it in this show, as we discuss the ins and outs of these crazy times for HR tech, hiring, and of course, AI. So listen in and take a look into the crystal ball while staying grounded in the truth!

Topics discussed and wisdom dropped include:

1. Why ATSs Are Going to Become Extinct
Madeline explained that ATSs in their current form are not built for the way talent acquisition is evolving. Recruiters are frustrated because ATSs don’t support the workflows or user experience they need, and they will eventually be replaced by more dynamic, integrated platforms that actually match how hiring happens today. Hello AI!

2. What Her Research Says About Skills-Based Hiring
Madeline points out that skills-based hiring is more aspirational than real for most organizations. Aptitude Research has found that companies often treat skills like the old competency models — static, outdated, and resource-intensive — or rely too heavily on AI. Both make it hard to translate skills into practice without validated frameworks and clean, usable data. The path forward requires a commitment to strategy, clarity, and validation.

3. How the Fast-Moving Nature of AI Impacts HR Tech Buying
Madeline notes that AI has changed how companies buy HR tech because the market is moving so quickly. In the past, companies would take years to build strategies before investing in technology, but now AI allows them to start much faster — sometimes adopting before they fully understand how to implement, which creates both opportunity and risk. Beware of AI FOMO!

4. Agentic AI and Hiring — What Will the Impact Be?
She described “agentic AI” as a coming wave where AI systems won’t just provide insights but will take autonomous actions. In hiring, this could mean systems that source, screen, and even interact with candidates automatically — raising big questions about oversight, fairness, and how much decision-making organizations are comfortable handing off to machines. Get ready, because the rise of autonomous hiring agents is upon us.

5. The Impact of AI on Candidate Experience
Madeline stresses that AI can either improve or damage the candidate experience depending on how it’s implemented. Candidates expect personalization, transparency, and fairness, and if AI-driven processes feel opaque or impersonal, trust will erode quickly — but if designed well, AI can actually enhance communication and responsiveness. We must not villainize AI for this: there is a lot we can do to enhance candidate experience, and it can actually include the use of AI if done thoughtfully.

6. What Will This Look Like 20 Years From Now?
Looking ahead, Madeline predicts that hiring will look radically different in 20 years, with skills-based approaches fully realized and AI deeply embedded into every step of the talent lifecycle. The key difference will be that technology will finally deliver on the vision of matching people to opportunities more accurately, quickly, and fairly at scale. AMEN, let’s just make sure that people remain in charge!

Check out the episode and learn about the trends from two of the best! And do yourself a favor and visit Aptitude Research’s website, where you can find free access to all of their amazing research!

This is a public episode. If you would like to discuss this with other subscribers or...

Duration:00:47:10


AI Adoption is a Human Problem, Not a Tech Problem

9/19/2025
“Most firms that are using AI are saving two to four hours per week per employee. That’s not transformative. That’s just doing the same thing faster.” — Alexis Fink

Introduction
In this episode of Psych Tech @ Work, Mayda Tokens (my AI co-host) and I sit down with Alexis Fink: I-O psychologist, long-time HR tech leader at Microsoft, Intel, and Meta, longtime friend, and president of the Society for Industrial-Organizational Psychology (aka SIOP)! Alexis brings decades of experience at the intersection of people, organizations, and technology to the studio, offering a holistic and integrated perspective on the opportunities and challenges of AI in the workplace that is based on reality, not pure philosophy.

We challenge Mayda to hang with us as we talk about all things people, technology, and the future of work. Alexis rocks it. You be the judge of how well Mayda meets the challenge. Hint: like all AI, Mayda is still a work in progress that fails sometimes, while still feeling miraculous IMHO. I mean come on, she speaks in emoji!!!

Alexis leads the charge with her take on these great highlight topics:

1. The Transformation of Knowledge Work
AI is reshaping not just factory tasks, but the decision-making and knowledge roles once thought safe from automation.

2. Organizational Design in an AI Era
True progress requires rethinking workflows so humans and machines complement each other rather than compete.

3. Data Quality and Human-Centered Design
Most raw HR data isn’t fit for AI, making richer, cleaner, and more contextual data essential for real impact.

4. Risk, Accountability, and Quality Control
As AI takes on more autonomy, organizations must adapt proven quality management and governance principles to keep it accountable.

5. The Human Problem of AI Adoption
The hardest barriers to AI adoption aren’t technical but human — fear, resistance, and behavior change.

6. Looking to 2035: The Next-Gen I-O Psychologist
Future I-Os will master AI as a partner, using simulation and immersive tools while keeping work human-centered.

Conclusion
Our conversation underscores a central theme: AI is not even close to perfect, and we need to recognize this (Mayda’s responses to our questions are proof of AI gone whack!). AI’s future in work won’t be defined by algorithms alone, but by how organizations redesign processes, manage risk, and support people through change. For I-O psychologists, HR leaders, and technologists alike, the task ahead is clear: ensure AI is not just bolted onto old systems, but opens opportunities for true collaboration with us humans.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

Duration:00:54:02


Overcoming Obstacles to AI Adoption Through Creative Play

8/28/2025
“The problem with AI adoption isn’t just technical—it’s emotional. Creativity lowers the barrier of fear, and that opens the door to skill building.” – Jimmy Lepore Hagan Newsflash! After a much needed hiatus- Psych Tech @ Work is back with a vengeance! During the break I have been heads down in my lab- experimenting and playing with AI. SHE’S ALIVE! This episode marks the debut of my self-created AI podcast co-host Mayda Tokens. It took me three weeks to make her and during this process I explored the human side of effectively collaborating with AI. Making Mayda required me to flex my creativity, critical thinking, flexibility and perseverance. My Mayda experience prepared me firsthand for a great conversation with Jimmy about creativity, AI, and the human psyche. In this episode of Psych Tech @ Work, I welcome my new friend and fellow New Orleanian Jimmy Lepore Hagan. Together we explore why creativity is the missing link in many corporate AI readiness programs — and how it can be leveraged to help individuals and teams move from fear to fluency in a rapidly transforming world. Jimmy brings his bold, experience-driven perspective to the conversation, making the case that creative courage is not a soft skill — it's a strategic asset. Together, we discuss Jimmy’s new framework for enabling AI adoption through creativity — and my addition to the delivery of his hands-on workshop designed to help HR teams, L&D leaders, and talent professionals build AI fluency through creative exploration. Summary Creative thinking isn’t just about making art — it’s about rewiring our brains to embrace ambiguity, take risks, and explore the unknown. In this episode, we discuss how cultivating creativity can de-risk the AI learning curve, helping professionals feel more confident engaging with emerging tools. In an era of automation, the ability to experiment, play, and fail safely is what separates those who adapt from those who resist. 
These traits are not innate — they can be developed, and doing so can radically change how individuals approach new technology. The episode also highlights a workshop experience that puts this theory into action: a fun, safe, and high-impact program designed to build creative fluency first — and then apply it to AI. This approach helps teams lower psychological barriers to AI experimentation and open the door to real skills development. Themes We Explore * Creativity as an Onramp to AI Readiness Creativity builds the core capacities — curiosity, experimentation, and comfort with failure — that directly translate to AI learning and application. * Why Psychological Safety is a Prerequisite Without a safe space to explore, innovation doesn’t happen. We talk about how to build the cultural conditions that support real experimentation with new tech. * Learning to Play (Again) Many professionals have been conditioned out of creativity. Jimmy explains how low-stakes exercises can reawaken this muscle and prepare the brain for change. * Failure as Fuel We unpack the idea that failure is not just acceptable — it’s critical for both creative and AI development. Practicing failure makes success possible. * Designing for Transformation Hear how we’re applying these concepts in a new experiential workshop, helping HR and L&D leaders guide their organizations through tech transformation with humanity and purpose. The last word Despite the hype, many organizations struggle to operationalize AI adoption. Often, the barrier isn’t technical — it’s emotional and behavioral. Employees hesitate to engage because they fear doing it wrong or looking incompetent. This episode introduces a radical but practical solution: creativity. By focusing first on human traits like courage, curiosity, and psychological safety, organizations can build a foundation for real AI fluency and sustainable innovation. I have to give a direct and shameless plug for our workshop. 
Our workshop combines science, storytelling, and...

Duration:01:12:08


Creativity Is the Gateway to AI Transformation

8/18/2025
My creative experience building an AI podcast co-host says it all. Hear all about it on the next episode of the Psych Tech @ Work Podcast - coming soon!

AI skills are essential but daunting

AI adoption is accelerating—over 70% of companies report they’re actively integrating AI tools into their workflows. But for the people expected to use those tools, it’s a different story. Most professionals say they feel unprepared or even anxious about using AI on the job. Traditional training often falls short with AI skills because it focuses on tools, not mindset. And the stakes are high: as AI becomes embedded in everyday work, careers will increasingly rely on comfort and expertise with AI. This gap, and the demand for innovative strategies to close it, has been top of mind for me. Good news: my fascination with AI led me to a solution! (More on this later.)

Creativity unlocks AI skills

I recently gave a talk at a meeting of the New Orleans AI Philosopher’s group (AKA NOAI) on AI and the future of our local economy. At this event I saw a talk by Jimmy Lepore Hagan—an artist, designer, and educator—who shared a fresh, unique, and noteworthy approach to AI adoption. Jimmy’s talk was about the value of creativity in lowering fear of AI. He demonstrated concepts from a workshop series he has developed, featuring low-stakes creative exercises grounded in design thinking that help people build comfort, confidence, and curiosity when working with AI. As a workplace psychologist I immediately saw the potential for a collaboration: applying Jimmy’s hands-on educational model to my world to help people leaders solve a difficult problem. As someone who’s spent decades applying psychological science to the development and measurement of human traits in the workplace, I have firsthand experience with the impact of creativity on outcomes directly related to work performance.
As I processed all of this, I took a step back and reviewed foundational research that shaped my earlier work—this time through the lens of AI. The connections stood out immediately. Traits like divergent thinking, cognitive flexibility, and creative self-efficacy have long been linked to performance, but they also play a critical role in how people approach new, uncertain technologies. The evidence is clear: creativity and experiential learning do more than build skills—they tap into deeply human strengths that make people more open, adaptable, and ready to thrive in the face of change.

My dance with AI says it all

It became pretty clear to me that a collaboration with Jimmy could really have some legs. To get the ball rolling I invited Jimmy to be a guest on my podcast, Psych Tech @ Work. To prepare, I wanted to gain some firsthand experience with using creativity to sharpen my AI skills. I suck at coding, and the requirement to use Python for this definitely gave me some anxiety, but I knew ChatGPT could somehow have my back. Thus came the idea to challenge myself (and have some fun) by building an AI podcast co-host, Mayda Tokens. Mapping out and executing a workflow to bring Mayda to life threw me plenty of curveballs. Some of ChatGPT’s more noteworthy and frustrating shenanigans included:

* Multiple times ChatGPT relentlessly tried, and continually failed, to solve technical issues, and it would not give up until I suggested that we were going in circles in a blind alley and should explore alternative methods. This prompt immediately led to a set of viable alternatives that would never have been explored if I hadn't decided to pull the plug.
* When I backed ChatGPT into a corner, I was flabbergasted when, instead of hallucinating a solution or looking for another option, it simply refused to help me. This was a head-scratching result that must have exposed a ghost in the machine, because its prime directive is NEVER to say NO!
* As I explored different options for Mayda’s voice, my text-to-speech...

Duration:00:04:38


Scaling AI Innovation for Hiring: Lessons from the Frontlines

5/12/2025
Guest: Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management

“We have to stress-test innovation in the messiness of real-world hiring, not just ideal lab conditions.” -Christine Boyce

In this episode of Psych Tech @ Work, I’m joined by my longtime friend Christine Boyce, Global Innovation Leader at ManpowerGroup/Right Management, to explore how innovation — especially around AI — is reshaping hiring and talent development at scale, and why solving for trust, transparency, and operational realities matters more than ever.

Summary

At the heart of this conversation is the reality that scaling AI innovation in hiring brings massive complexity. While AI offers incredible promise, solving for accuracy, fairness, and operational reality becomes exponentially harder when you're dealing with a large number of unique clients. Christine Boyce, through her work at ManpowerGroup & Right Management, operates at the intersection of these challenges every day. Unlike internal talent acquisition leaders who focus on one organization's needs, Christine must drive innovation across a vast client portfolio. Each client presents different barriers — from data limitations to ethical concerns to regulatory pressures — and innovation must be modular, defensible, and adaptable to succeed. This vantage point gives Christine a unique, big-picture view of how AI adoption really plays out across industries and markets.

We dive into the practical challenges of innovating responsibly: earning trust, scaling solutions across diverse environments, and balancing speed with fairness. Christine’s work highlights how innovation must be deeply disciplined if it is to achieve true scale and impact.

The Core Challenge: Scaling Accuracy and Fairness

At the heart of using AI for hiring lies the challenge of achieving accuracy and fairness at scale.
AI’s true value isn’t just its ability to make individual decisions — it’s in processing vast amounts of data and automating judgment across thousands of candidates. However, scale magnifies both strengths and weaknesses: minor biases can grow into systemic problems, and small inefficiencies can snowball into major failures. Staffing firms like ManpowerGroup offer critical real-world lessons:

* Scale forces discipline — Every AI tool must be rigorously vetted for fairness, transparency, and defensibility before deployment.
* Real-world variation stresses the system for the better — Tools must flexibly adapt to diverse jobs, industries, and candidate pools. This improves the overall path of innovation and drives learning across the board.
* Speed must not erode trust — Productivity gains must still respect ethical standards and candidate experience.
* External accountability keeps AI honest — Clients demand transparency, validation, and explainability before adoption.

Real Barriers to AI Adoption: What Clients Are Facing

Despite AI's potential, Christine identifies several persistent hurdles she faces when serving her diverse slate of clients:

* Resistance to Behavior Change: Even demonstrably valuable AI tools often struggle against entrenched workflows and distrust of automation.
* Ethical and Trust Concerns: Clients demand AI systems that are transparent, explainable, and defensible, fearing reputational or regulatory risks.
* Vendor Noise Overload: Saturation by "AI-washed" vendors makes it hard to differentiate true innovation from hype.
* Mismatch Between Hype and Practical Needs: Clients need tools that solve today’s operational problems — not just futuristic visions disconnected from reality.
* Fear of Creeping AI Adoption: Organizations worry about AI capabilities being embedded into systems without visibility or intentionality.
* Compliance and Regulation Anxiety: Global and local regulations (like the EU AI Act or pending US laws) create urgency for proven, compliant AI solutions.
* Talent Data Readiness: Without...

Duration:00:52:21


Responsible AI In 2025 and Beyond – Three pillars of progress

4/15/2025
"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." – Bob Pulver

My guest for this episode is my friend in ethical and responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices. Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:

* Human-Centric AI
* AI Adoption and Readiness
* AI Regulation and Governance

These are the themes we explore in our conversation, along with our thoughts on how each pillar has changed and evolved over the past year.

1. Human-Centric AI

Change from Last Year:
* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.

Reasons for Change:
* Increasing comfort with AI and experience with the benefits it brings to our work
* Continued exploration and development of low-stakes, low-friction use cases
* AI continues to be seen as a partner and magnifier of human capabilities

What to Expect in the Next Year:
* Increased experience with human-machine partnerships
* Increased opportunities to build superpowers
* Increased adoption of human-centric tools by employers

2. AI Adoption and Readiness

Change from Last Year:
* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.
* Significant growth in AI educational resources and adoption within teams, rather than just individuals.
Reasons for Change:
* Improved understanding of AI's benefits and limitations, reducing fears and resistance.
* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.

What to Expect in the Next Year:
* More systematic frameworks for AI adoption across entire organizations.
* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

3. AI Regulation and Governance

Change from Last Year:
* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).
* Momentum to hold AI vendors increasingly accountable for ethical AI use.

Reasons for Change:
* Growing awareness of risks associated with unchecked AI deployment.
* Increased legislative activity at state and global levels addressing transparency, accountability, and fairness.

What to Expect in the Next Year:
* Implementation of stricter AI audits and compliance standards.
* Clearer responsibilities for vendors and organizations regarding ethical AI practices.
* Finally, some concrete standards that will require fundamental changes in oversight and create messy situations.

Practical Takeaways: What should we be doing to move the ball forward and realize AI's full potential while limiting collateral damage?

Prioritize Human-Centric AI Design
* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology's sake.
* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.
Build Robust AI Literacy and Education Programs
* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical...

Duration:00:54:44


The Reality of Skills-Based Hiring Rests on Three Essential Pillars - with Jason Tyszko

3/18/2025
“We have to move beyond the idea that a skills-based job description is enough—there needs to be validation, assessment, and a clear pathway for job seekers to prove their abilities.” -Jason Tyszko

In this episode of Psych Tech @ Work, I sit down with Jason Tyszko, Senior Vice President of the U.S. Chamber of Commerce Foundation, to discuss what it really takes to make skills-based hiring a reality. Jason oversees the Foundation’s T3 Innovation Network, a public-private initiative aimed at creating a more equitable and inclusive job market. T3 focuses on using digital tools to improve communication between different parts of the job market, ensuring that all learning is recognized and valued. T3’s mission to bridge gaps between employers and workers via the advancement of skills-based hiring makes Jason one of the world’s foremost authorities on the subject.

Our conversation is a must for anyone interested in understanding the REALITIES required for true skills-based hiring. Most conversations on the subject are more hype than substance, but not this one! Jason takes us deeper into what it will take to make skills-based hiring more than just an empty buzzword. To ground our conversation in a dose of reality, Jason boils success with skills-based hiring down to three pillars:

* Interoperable Skills Data
  * To make skills-based hiring a reality, we need standardized, structured, and widely accepted skills data that flows seamlessly across education providers, employers, and workforce systems.
  * Without interoperability, skills data remains fragmented, making it difficult for employers to assess candidates meaningfully.
* Employer Engagement and Adoption
  * Employers must align job descriptions, hiring processes, and internal mobility pathways around skills rather than degrees or traditional credentials.
  * Many organizations support skills-based hiring in theory but fail to implement it fully due to ingrained legacy practices.
* Technology Infrastructure and Ecosystem Readiness
  * AI, job-matching platforms, and hiring tools must be built to recognize and evaluate skills accurately, rather than simply filtering candidates based on outdated proxies like job titles or degrees.
  * Systems should support skills validation, assessment, and transparent career pathways to ensure fair and effective hiring decisions.

Jason explains how these pillars support and enable five critical but often overlooked elements that are essential to making skills-based hiring work:

1. Learning and Employment Records (LERs) & The LER Resume Standard
* What it is: LERs are digital, verifiable records of a person’s skills, training, certifications, and work experience. Instead of relying on traditional resumes or self-reported skills, LERs allow employers to see a structured, validated record of a candidate’s capabilities.
* Why it matters: Today’s hiring systems don’t talk to each other. Skills data is trapped in different platforms (learning management systems, certifications, HR software). LERs allow skills-based hiring to function at scale by ensuring a candidate’s credentials are portable and universally recognized.
* LER Resume Standard: This is a newly developed resume format built to process LERs, ensuring HR tech systems can read, compare, and use skills-based data more effectively.

2. Durable Skills
* What it is: Unlike technical skills (which can quickly become outdated), durable skills are long-lasting, transferable skills like critical thinking, adaptability, leadership, and collaboration.
* Why it matters: Most AI-driven hiring tools over-prioritize technical skills, but durable skills are what truly drive career success. Without a way to assess and validate them, companies risk hiring for short-term needs instead of long-term potential.

3. The Interoperability Layer
* What it is: A technical framework that allows skills data from different platforms to connect and work together—like an API that helps...

Duration:00:59:01


Are These 4 AI Mistakes Sabotaging Your Talent Strategy?

3/12/2025
In our recent LinkedIn Live session, my esteemed colleague Neil Morelli, founder of Workplace Labs, and I present a philosophical but practical approach to the adoption of HR tech tools. Check out the full video of the presentation attached to this post and our accompanying slides (found at the bottom of the post). Here is a quick overview of the ideas that form the foundation of the presentation.

“The highest-level goal of the talent acquisition (TA) function is to ensure that an organization has the right people, in the right roles, at the right time, to drive business success.” -ChatGPT 4o & your hosts’ combined 50 years of experience

Talent leaders are feeling the pressure to execute

Modern hiring problems such as resource constraints, candidate scarcity and overload, the move to skills-based hiring, and avoiding bias have talent leaders feeling the pressure to find fast solutions! Relieving these pressures often creates a temptation to put tools before strategy. AI is a great example of this. The stakes are high, and AI offers a compelling solution - or does it? AI is complex, and making decisions about it requires a strong foundation of knowledge and careful planning. In this presentation we discuss four common mistakes in the adoption of HR tech, with a focus on AI tools (are there any other types these days?). We discuss how a tools-first mentality is often the root cause of these four common mistakes and offer guidance on how to avoid them.

1. Missing AI's 'creeping normality': As technology becomes more entrenched in your processes and vendors add accessible new functionalities, adoption often occurs with little oversight or consideration. When it comes to solving problems related to talent supply or overload, AI recruitment platforms are increasingly embedding "talent matching" functionalities that create risk without any substantial rewards.

2. Chasing Skills Without Definition or Direction: We can all agree that skills-based hiring has merit.
But it requires alignment on what a skill means to your organization and a holistic view of where skills matter and why. Merely removing resumes from the evaluation process, or adopting tools (AI or otherwise) that claim to support skills-based hiring without a holistic strategy, is a dead end.

3. Failing to evaluate your firm's culture and climate for adopting AI-based tools: There is a maturity required for the successful adoption of AI-based tools. Understanding your firm's readiness for AI-based tools, and ensuring that you are ready to go all in, is essential. Education on, and knowledge of, AI across the entire organization is a big part of successful adoption.

4. Letting vendors dictate strategy and adoption: Most vendors do offer products that can have an impact, and their messages make it tempting to jump right in. But before biting on a shiny new object, adoption of any AI-based tool should be preceded by a strategy developed in-house. Vendors must be held to a standard evaluated by domain experts using a framework built on the principles of ethical and effective use of AI.

At the end of the presentation we provide a case study that will probably feel relatable to any talent acquisition professional. Here we tell a story of how mistakes are made and provide insights to help create the awareness needed to avoid them. No one is perfect - but AI alone will not create perfection. Keeping things in perspective and following a thoughtful, methodical process that is not driven by fear are essential to the successful adoption of AI technologies.

Download our slides here

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

Duration:00:50:16


Recruiting Tech’s Past, Present and Future- W/Jeff Taylor: OG, Founder @ Monster.com & Boomband

2/28/2025
"The hiring industry is at a breaking point—AI is putting pressure on old systems that were never designed for this level of automation." –Jeff Taylor

In this episode of Psych Tech @ Work, I am joined by Jeff Taylor, serial entrepreneur and founder of Monster.com and Boomband, a revolutionary new platform that is looking to turn hiring on its ear. Few people have shaped the hiring industry as profoundly as Jeff, whose vision transformed job search from a niche experiment into an industry standard. Jeff’s journey—from building the first large-scale job board to continuously innovating in the talent acquisition space—gives him a unique perspective on where hiring technology has been and where it’s headed, making him the perfect guest to explore the next big disruptions in talent acquisition and how AI is reshaping the hiring process.

In our time together we reminisce about the story behind Monster’s memorable Super Bowl ads (who can forget the kid saying, “I want to claw my way up to middle management!”) and the formative impact my job at Monster (circa 2000) has had on my career. But enough about me!

Our conversation explores the rapid acceleration of AI in recruiting, from automating sourcing and matching to the potential risks of AI-generated applications flooding hiring systems. Jeff happily shares his candid thoughts on why hiring technology has stagnated, how AI is creating new challenges for recruiters, and what companies must do to stay ahead in an increasingly automated hiring landscape. We also discuss the core concepts behind Boomband, Jeff’s new social hiring platform.

Topics Covered:

* Monster.com’s origin story and how it transformed hiring and created the “job board” industry.
* The shift from traditional job search to AI-driven sourcing and candidate matching, and what this means for the future of hiring.
* The pros and cons of AI-generated resumes and job applications—are we heading toward an overload of unqualified applicants?
* The failure of legacy hiring systems to keep up with modern job-seeker behavior.
* The potential for AI to create more personalized and predictive hiring experiences, and Boomband, Jeff’s new venture focused on creating a new paradigm for hiring (again!).

Takeaways:

* Job boards revolutionized hiring—but they haven’t evolved fast enough. The core concept of posting jobs and waiting for applications hasn’t fundamentally changed in decades.
* AI is making job search more efficient but also more chaotic. Automated resume generation and mass applications are overwhelming recruiters and breaking traditional applicant tracking systems.
* Legacy hiring technology is struggling to adapt. The demand for AI-powered sourcing and skills-based hiring is exposing the limitations of old-school job posting and resume-matching platforms.
* The next frontier of hiring is predictive and personalized. Jeff envisions AI-driven career pathing, real-time job market intelligence, and new ways to match candidates based on abilities, not just experience.

Jeff’s perspective on AI-driven hiring, the changing nature of job search, and where hiring technology must go next makes this conversation a must-listen for anyone interested in the future of work.

Duration:00:47:44


AI’s Role in Redefining the Future of Psychometric Assessments (and Hiring)

2/14/2025
“The future of assessments is about customization at scale. AI allows us to generate and adapt assessments in real-time, making them more relevant to specific job roles.” –Ben Williams

Introduction

In this episode of Psych Tech @ Work, I sit down with Ben Williams, Managing Director of Sten 10, to discuss how AI is reshaping the field of psychometric assessments and hiring processes. Our conversation dives into the evolving landscape of AI-driven assessments, the ethical considerations of using AI in hiring, and the challenges of maintaining transparency and fairness while incorporating new technologies. Ben shares insights into blending AI with traditional assessment tools and how this impacts the future of selection processes.

Key Topics Covered:

* The role of AI in automating and customizing assessments
* Emerging challenges in trust, fairness, and explainability in AI-powered hiring
* The importance of designing job-specific psychometric tools that align with organizational needs
* AI's potential in generating, scoring, and validating assessments
* Future implications of AI for entry-level and senior hiring roles

Summary

We explore AI’s role in streamlining psychometric assessments while addressing challenges in maintaining transparency and fairness. Ben describes how Sten 10 has integrated AI to make assessment processes faster and more personalized without losing the critical human oversight needed for ethical hiring practices. We also discuss prompt engineering, AI literacy, and the limitations of AI-generated assessments. One significant takeaway is the growing importance of designing highly contextual and customized assessments using AI while ensuring they remain interpretable and meaningful. We touch on real-world examples, including how AI can generate coaching tips and personality profiles, as well as potential concerns regarding over-reliance on AI outputs.
The conversation also highlights emerging roles related to AI governance and the need for regulatory oversight to ensure fair hiring practices.

Key Takeaways:

* AI augments, but doesn’t replace, human oversight: While AI is making assessments faster and more scalable, human validation remains critical to ensuring fairness.
* Custom psychometric assessments are the future: Moving beyond off-the-shelf tools, companies can develop highly specific and job-relevant assessments using AI.
* Prompt engineering for assessments: Organizations can create better assessment tools by focusing on AI prompt development and optimization.
* AI literacy is essential for hiring professionals: As AI becomes more embedded in hiring, HR professionals need to understand its benefits and limitations to apply it responsibly.
* Trust and explainability are key: Companies must prioritize transparency to gain candidate trust and meet regulatory standards.

Conclusion

AI’s role in hiring is evolving rapidly, and the opportunities for innovation are endless. However, as Ben notes, the path forward requires a careful balance between technological advances and maintaining human control. By designing psychometric tools with AI and human collaboration, organizations can achieve a fairer and more effective hiring process.

Take It or Leave It? Articles:

* “Ineffective Human-AI Interactions and Solutions” — Oxford Review
  * Summary: This article delves into the factors influencing human-AI collaboration, including cognitive load and decision control. Ben highlights how integrating AI into familiar tools like Slack and Word can reduce friction and improve adoption.
* “AI and Public Perception: What Americans Really Think” — Center for Data Innovation
  * Summary: A survey reveals mixed feelings about AI, with curiosity decreasing and negative emotions on the rise. Ben critiques the contradictions in public attitudes toward AI and how these perceptions could shape its future adoption in hiring.

Duration:01:01:25


What does AI know about Skills Based Hiring? Listen and find out!

1/31/2025
“We need global standards to define and verify skills, or we’ll be left with confusion and inconsistency across industries.” -Notebook LM’s Deep Dive Podcast Hosts

Skills-based hiring is all the rage, and so is AI. So what happens when you mix the two? In this special edition of Psych Tech @ Work, I handed the mic over to AI using Google’s Notebook LM. The result? A fully AI-generated exploration of the evolving world of skills-based hiring. But how well did AI do at covering this complex and nuanced topic? Listen and decide for yourself. In the meantime, here is a short summary to pique your curiosity.

Skills-based hiring promises to break down barriers and redefine how we think about qualifications, but it’s not without challenges. This episode examines how companies can move beyond traditional degree requirements and leverage diverse learning pathways. It also highlights the shift from career ladders to flexible, lattice-like models and the critical role of leadership in making these transformations happen.

Key Takeaways:

* AI is a tool, not the solution: Organizations need both AI-driven assessments and human judgment to effectively identify and verify skills.
* Degrees aren’t everything: Employers must embrace non-traditional education pathways to access untapped talent.
* Lifelong learning is essential: Workers should continuously upskill and showcase their abilities through portfolios and personal branding.
* The career ladder is outdated: Flexible career paths based on transferable skills are the future.
* Leadership drives change: True transformation in hiring practices requires bold decisions beyond tech implementation.

Conclusion

This AI-powered episode demonstrates the potential of using AI for content creation while also showing its limitations.
AI did a great job providing structure and highlighting key points, but human oversight remains essential to ensure deeper exploration and to address the human factors that technology alone can’t fully capture. Skills-based hiring requires more than AI—it needs leaders willing to rethink and redesign hiring practices with empathy and inclusivity in mind. Please listen and share your thoughts on how these robots did exploring the issues and drawing meaningful conclusions!

Duration:00:23:23


Bridging Leadership, Psychological Safety, and Technology with Alison Eyring

1/10/2025
“Technology should enable human connection, not replace it.” —Alison Eyring

Introduction

In this episode, I’m joined by my friend Alison Eyring, an IO psychologist with decades of experience in global leadership and talent development and the founder of Produgie. I have known Alison for a very long time - in fact, she was my first “real world” project sponsor back in 1994! It was a pleasure to welcome Alison to the show for a great conversation about the role of human-centered design in building software that helps leaders do their thing!

Summary

Our conversation explores the intersection of leadership psychology and technology design. Alison shares insights on how psychological safety can be both measured and improved, emphasizing its critical role in team dynamics and organizational success. We dive into her approach to developing tools tailored to user needs, the importance of cultural agility for global leaders, and how technology can both enhance and challenge workplace trust. Throughout, Alison highlights how organizations can foster meaningful change through a combination of data, design, and human connection.

Key Topics Covered:

* The psychology behind software usability and human-centered design.
* Measuring and improving psychological safety within teams.
* The evolving role of AI in leadership and organizational development.
* Using adaptive tools to support leaders in achieving greater impact.
* The challenges and opportunities of cultural agility in a globalized workforce.

Takeaways:

* Psychological Safety: Leaders can actively improve psychological safety, a critical element for team effectiveness and engagement, by fostering trust and transparency.
* Cultural Agility: Leadership in a global context requires a combination of self-awareness, competencies, and experiences to navigate cultural differences effectively.
* Data-Driven Insights: Organizations can gain actionable insights from assessments and development tools to better understand leadership strengths and weaknesses. * Human-Centered Design: Building technology for HR or leadership should prioritize the user’s challenges and needs, not just the buyer’s demands. * AI in Leadership: AI can support leaders in providing feedback, fostering growth, and driving measurable outcomes, but its use must be transparent and human-supervised. Take It or Leave It Articles: * “The Homework Apocalypse” by Ethan Mollick * Summary: This article discusses how educators are grappling with AI tools used by students for coursework and the need to rethink educational approaches. Alison critiques the rapid pace of AI developments and emphasizes the importance of teaching judgment and understanding bias in AI-generated insights. * “Psychological Safety in the Workplace” * Summary: This article explores what psychological safety is, what erodes it, and how organizations can foster it. Alison highlights the timeless nature of this topic and its centrality to leadership and organizational success. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com

Duration:00:52:28


Deconstructing Jobs in the Age of AI: Skills, Work Redesign, and The Future of Work

12/19/2024
"AI isn’t replacing people—it’s augmenting them. The people who know how to use AI will replace those who don’t."

"Redesigning jobs is about understanding which tasks humans excel at and which tasks AI can handle—then finding the perfect balance."

Guest: Sania Khan

* Labor Economist, founder of Inflection Point Consulting, Chief Economist at a leading talent intelligence AI company, and former Senior Economist at the Bureau of Labor Statistics (BLS)

Summary:

In this episode of Psych Tech @ Work, I welcome a new friend, the brilliant Labor Economist Sania Khan, whose unique perspective blends macroeconomic labor trends, AI-driven work redesign, the evolution of skills, and the future of work. Sania shares insights from her experience at the Bureau of Labor Statistics and her work with emerging talent intelligence tools to tackle one of today’s hottest topics: how jobs are being fundamentally deconstructed into tasks, skills, and competencies.

We dig into how AI is reshaping work—from automating routine tasks to creating new opportunities—and what this means for businesses and individuals. Sania makes the case for job redesign as an essential forward-looking strategy for organizations as they adapt to the increasing role of AI in redefining the rules of work. We agree that the world of work will increasingly find itself tied to a skills-based economy, which will require moving beyond buzzwords like “skills-based hiring” and aligning emerging technologies with human potential. This will mean building consistent skills taxonomies, focusing on durable skills like problem-solving and critical thinking, and closing the gap between hype and reality when it comes to AI’s impact on the labor market.

Topics Covered:

* Deconstructing Jobs with AI
* How AI is automating tasks within jobs, freeing workers for more meaningful work.
* The importance of job redesign to align organizational goals with evolving roles.
* Skills-Based Hiring and Skills Taxonomies
* Why a globally accepted definition of "skills" remains elusive and how this hinders interoperability across platforms.
* The challenge of relying on resumes and job descriptions as source materials for skills analysis.
* The Future of Work and AI’s Impact
* AI’s dual role: creating efficiencies while raising concerns about job replacement.
* Predictions for future jobs—like AI specialists, prompt engineers, and responsible AI officers—and how organizations can prepare.
* Durable Skills and Adaptability
* Why “durable skills” like problem-solving, critical thinking, and agility will define professional success.
* How workers can future-proof their careers by learning to work with AI, not against it.

Takeaways:

* AI is reshaping work by automating routine tasks, but humans remain critical for complex, meaningful roles.
* Organizations must focus on job redesign to capitalize on AI while ensuring employees do meaningful, value-added work.
* Skills-based hiring is promising but hindered by inconsistent taxonomies and unreliable data sources.
* Durable skills—like critical thinking, problem-solving, and adaptability—are the key to navigating AI-driven change.
* Workers who learn to augment their skills with AI will have the greatest advantage.
* New roles like AI specialists, responsible AI officers, and prompt engineers will emerge as businesses adopt more advanced technologies.

Articles Discussed in the "Take It or Leave It" Segment:

* "Research: How GenAI is Already Impacting the Labor Market" – Harvard Business Review
* Summary: A data-backed look at how generative AI is reducing demand for automation-prone gig work while increasing competition in the labor market. Sania underscores the importance of becoming exceptional at your craft to remain competitive.
* "How AI Is Fueling Long-Term Job Growth" – Fast Company
* Summary: A positive perspective on AI’s role in creating new jobs, like AI specialists and data scientists. Sania challenges...

Duration:01:02:27


Understanding Cultural Agility in Global Work Environments

12/6/2024
"The most important competency for success in global assignments? Humility—being willing to learn how to succeed in a new cultural context." –Paula Caligiuri

Paula is THE expert in this realm! In this episode, I welcome Paula Caligiuri, a renowned expert in cross-cultural psychology and global leadership and the author of many books about cross-cultural adaptation and career happiness, the latest ones being:

* Build Your Cultural Agility: The Nine Competencies of Successful Global Professionals (2021)
* Live for a Living: How to Create Your Career Journey to Work Happier, Not Harder (2023, co-authored with Andrew Palmer)

I have known Paula for almost 30 years. Her research played an essential role in my dissertation, which was on cross-cultural adaptation in expatriate work assignments. While I do not work in this area, Paula sure does! She has dedicated her career to research and practice on the psychology of cross-cultural adaptation in both the personal and professional realms. I really enjoyed the opportunity to speak with Paula about the intricacies of cultural agility, the challenges faced by individuals working internationally, and how organizations can better prepare their employees for success in diverse environments.

Cultural Agility is the name of the game

Our conversation is anchored by the concept of “Cultural Agility,” a combination of awareness, competencies, and experiences that allows individuals to be effective in multicultural environments. Paula describes it as being made up of:

* Awareness: Understanding one’s own values and how they compare to different cultural contexts.
* Competencies: The skills needed to enter a novel environment, learn it, and be effective. These include both relationship-oriented competencies (like perspective-taking, relationship-building, and humility) and personal self-oriented competencies.
* Experiences: Exposure to different cultural contexts, though Paula emphasizes that experiences alone are not enough.

Paula notes that cultural agility involves the ability to adapt and thrive in unfamiliar cultural settings. She emphasizes that it’s not just about giving people experiences abroad, but also about equipping them with the knowledge and skills to navigate cultural differences effectively.

Biology is a critical factor in adaptation

Probably the most interesting thing I learned from our conversation was the role hormones play in cultural agility: they can help individuals handle greater levels of novelty comfortably and effectively, and those with higher cultural agility are often better able to adjust to more challenging cultural contexts. Did you know that elevated cortisol levels in response to cultural unfamiliarity can impair cognitive functions, making it challenging to interpret social cues and adapt behaviors appropriately? Or that the novelty of a new culture triggers the brain’s reward system, releasing dopamine, which can enhance our motivation to engage and learn in the new environment? I didn’t! Adaptation begins with undertaking activities that put our chemicals in balance!

Assessment plays a central role in adaptation

I am not going to pass up the opportunity to talk about assessments. Paula has taken what she has learned and created the myGiide assessment, which measures cultural values and cultural agility competencies, providing users with insights into their cultural values and biases, allowing for comparative analysis with other cultures, and identifying potential areas of conflict or misunderstanding. The assessment is free. I took it and found the insights it provided super valuable.

myGiide is also an example of the role technology plays in cross-cultural adjustment

The impact of technology on cultural adaptation may surprise you. I went into our conversation thinking Paula would gush about how technology has made adapting to other cultures much easier. But I was wrong!

* Technology is a "double-edged sword" for cultural...

Duration:00:47:50


Success in Talent Acquisition = Foundation First, Tools Second - w/Linda Brenner

11/15/2024
"Hiring is broken not because of a lack of tools, but because we lack a disciplined, strategic approach. Technology only works when we have the right foundation." –Linda Brenner

In this episode of Psych Tech @ Work, I welcome my longtime friend and collaborator Linda Brenner for some straight talk about the challenges facing TA leaders in the age of talent shortages, AI, and general global insanity. This conversation serves as a roadmap for talent acquisition leaders looking to rethink their strategies, streamline their processes, and make smarter use of technology.

Linda explains why many companies struggle to attract and retain top talent despite using sophisticated AI and other technology solutions. We delve into the importance of aligning TA strategies with business goals, building clear processes, and minimizing reliance on outdated ATS systems that often hinder rather than help hiring efforts. We discuss the complexities of AI in recruitment, including video interview assessments and chatbots, and Linda highlights the need for human oversight in areas like candidate engagement and relationship building.

Topics Covered:

Talent Acquisition Audits:
* Linda describes her process for auditing talent acquisition, from evaluating business goals to diving deep into data, processes, and technology use.
* Common issues found in TA audits, including lack of alignment, undefined processes, and inconsistent use of ATS systems.

AI and Video Interviews:
* How AI is currently used in TA, and Linda’s views on the limitations and potential pitfalls, particularly around legal considerations and candidate engagement.

Skills-Based Hiring Misconceptions:
* The difference between true skills-based hiring and keyword matching.
* Why many organizations aren’t yet ready to execute skills-based hiring effectively due to foundational issues in their processes and technology.

Takeaways:

* Foundation First, Tools Second: AI and advanced tools can’t solve underlying issues. Establishing clear, consistent processes aligned with business goals is essential before adding new technology.
* Strategy over Tactics: TA leaders should build a strategy that accounts for different types of roles and aligns with company growth goals, instead of relying solely on quick fixes.
* Consider the Candidate Experience: Long, inefficient hiring processes lead to drop-offs and high turnover. Streamline processes with candidate engagement in mind.
* AI as a Support Tool, Not a Solution: Use AI to support administrative tasks and data collection, but maintain human oversight, especially in high-stakes areas like interviews and candidate assessment.

This episode’s "Take It or Leave It" Articles:

1. "AI-Enabled Work Ethic" by Charles Handler: In this article, I explore whether generative AI is an asset or a liability for job candidates and employers. We discuss the ethical considerations around candidates using AI tools in applications and how companies could structure policies to evaluate AI competency fairly.
2. "The Future of Talent Acquisition and AI" from Forbes: This article suggests that companies not using AI in talent acquisition will fall behind. Linda and I debate this, with Linda arguing that AI should only be implemented after TA processes are clearly defined and aligned with business objectives.

Duration:01:02:15