
Tech Transformed

Technology Podcasts

Expert-driven insights and practical strategies for navigating the future of AI and emerging technologies in business. Led by an ensemble cast of expert interviewers offering in-depth analysis and practical advice to make informed decisions for your enterprise.

Location:

United Kingdom


Twitter:

@EM360Tech

Language:

English

Contact:

+44 207 148 4444


Episodes

Are “Vibe-Coded” Systems the Next Big Risk to Enterprise Stability?

1/22/2026
Podcast: Tech Transformed Podcast
Guest: Manesh Tailor, EMEA Field CTO, New Relic
Host: Shubhangi Dua, B2B Tech Journalist, EM360Tech

AI-driven development has taken off recently, with vibe-coding becoming more common and accelerating innovation at an unprecedented rate. It is, however, also driving a substantial increase in costly outages, and many organisations do not fully grasp the repercussions until their customers are affected. In this episode of the Tech Transformed Podcast, EM360Tech’s Podcast Producer and B2B Tech Journalist, Shubhangi Dua, spoke with Manesh Tailor, EMEA Field CTO at New Relic, about why AI-generated code (so-called vibe-coding), rapid prototyping, and a focus on speed create dangerous gaps. They also discussed why full-stack observability is now crucial for operational resilience in 2026 and beyond.

AI Vibe Code: Prioritising Speed over Stability

AI has changed how software is built. Problems are solved faster, prototypes are created in hours, and proofs-of-concept (POCs) swiftly reach production. But this speed comes with drawbacks. “These prototypes, these POCs, make it to production very readily,” Tailor explained. “Because they work—and they work very quickly.” In the past, the time needed to design and implement a solution served as a natural filter; that barrier has now disappeared. Tailor tells Dua: “The problem occurs, the solution is quick, and these things get out into production super, super fast. Now you’ve got something that wasn’t necessarily designed well.” The outcome is new systems that work but do not scale: they lack operational resilience and greatly increase the cognitive load on engineering teams. New Relic’s research indicates that in EMEA alone:

Essentially, AI-driven development heightens risks and increases blind spots. “There are unrealised problems that take longer to solve—and they occur more often,” Tailor noted.
This is because many AI-generated solutions overlook operability, scaling, or long-term maintenance. Modern architectures were already complex before AI came along. Microservices, SaaS dependencies, and distributed systems scatter visibility across the stack. “We’ve got more solutions, more technology, more unknowns, all moving faster,” he tells Dua. “That’s generated more data, more noise—and more blind spots.” Traditional...
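As a general illustration of what full-stack observability signals look like in code (a toy sketch with invented names, not New Relic's instrumentation API), a service can emit a metric, a log line, and a trace span for each request it handles:

```python
import time
import uuid

# Toy in-memory telemetry store standing in for an observability backend
# (illustrative only -- real systems ship these via agents or SDKs).
telemetry = {"metrics": [], "logs": [], "traces": []}

def record_metric(name, value):
    telemetry["metrics"].append({"name": name, "value": value, "ts": time.time()})

def record_log(level, message, trace_id=None):
    telemetry["logs"].append({"level": level, "message": message, "trace_id": trace_id})

def traced(operation):
    """Wrap an operation in a minimal trace span and emit a duration metric."""
    def wrapper(*args, **kwargs):
        trace_id = uuid.uuid4().hex
        start = time.time()
        try:
            return operation(*args, **kwargs)
        finally:
            duration = time.time() - start
            telemetry["traces"].append(
                {"trace_id": trace_id, "op": operation.__name__, "duration_s": duration}
            )
            record_metric(f"{operation.__name__}.duration_s", duration)
            record_log("INFO", f"{operation.__name__} completed", trace_id)
    return wrapper

@traced
def handle_request(payload):
    # Hypothetical request handler -- the instrumented "service".
    return {"status": "ok", "echo": payload}

handle_request({"user": "demo"})
# One span, one metric, and one log were recorded for the request.
print(len(telemetry["traces"]), len(telemetry["metrics"]), len(telemetry["logs"]))
```

The point of the sketch is that each signal answers a different question (how much, what happened, where time went); vibe-coded systems that skip this instrumentation are the blind spots Tailor describes.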

Duration:00:21:43


AI in Sustainability: Frugal, Transparent, and Impactful Supply Chain Solutions

1/21/2026
In a world where climate change is reshaping the way we grow, transport, and consume the things we rely on, understanding the first mile of supply chains has never been more critical. That is the stage where over 60 per cent of risks arise, yet it remains the hardest to measure and manage. In a recent episode of Tech Transformed, Trisha Pillay sits down with Jonathan Horn, co-founder and CEO of Treefera, to explore how artificial intelligence is providing clarity, actionable insights, and sustainable solutions for this complex ecosystem.

The First Mile and Climate Pressures

Horn’s perspective comes from a mix of experience: growing up on a farm, studying physics, and working in investment banking. That combination gives him a lens on both the natural systems that underpin agriculture and the data-driven tools that help manage risk. Extreme weather patterns like droughts, heavy rainfall, and hurricanes are putting pressure on crops such as cocoa, coffee, wheat, and soy. The consequences ripple outward: production costs rise, commodity prices fluctuate, and supply chains become less predictable. A simple example illustrates this clearly: certain chocolate biscuits in the UK have moved from being chocolate-filled to chocolate-flavoured, reflecting disruptions in cocoa production in West Africa caused by extreme weather and disease. These changes are not isolated; they affect global markets and everyday products.

Turning Data into Actionable Insights

AI can help make sense of the complexity. Treefera, for instance, combines satellite imagery, sensor data, and other datasets to provide insights on crop yields, supply risks, and climate impacts. Horn describes it like a car dashboard: “You don’t need to know every technical detail to understand what’s happening and act accordingly.” The value of AI lies not in flashy algorithms but in its ability to translate raw data into practical decision-making tools. By analysing multiple signals, from weather events to agricultural output, AI can highlight trends, flag potential disruptions, and support planning for traders, insurers, or supply chain managers. The goal is clarity and action, not simply more information.

Data, Regulation, and Responsible Use

Alongside operational complexity, organisations face questions about data governance. Emerging regulations such as the EU AI Act aim to ensure AI is used responsibly, and companies need to maintain control over proprietary information while leveraging technology effectively. Horn stresses the importance of frugal, transparent AI applications that produce meaningful insights without unnecessary complexity. In practice, this means balancing innovation with compliance: using AI to understand risks, improve planning, and support sustainability without overstating its capabilities or creating new vulnerabilities. The conversation underlines a key point: the impact of AI is most tangible when it’s applied thoughtfully, in service of real-world decisions. In short, AI is helping organisations navigate the increasingly unpredictable intersection of climate, risk, and supply chain complexity. The first mile, long a blind spot, is becoming visible not through hype or marketing claims, but through practical, data-driven insight that helps people respond to the world as it is, not as we wish it to be.

Takeaways
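As a loose illustration of turning raw signals into an actionable flag (an invented toy with made-up data and thresholds, not Treefera's actual models), a sharp drop in estimated yield against a trailing baseline can be surfaced for review:

```python
# Toy disruption flag: mark periods where estimated yield drops sharply
# below the trailing average. All numbers are invented for illustration.
def flag_disruptions(yields, window=3, drop_threshold=0.2):
    """Return indices where yield falls >20% below the trailing-window mean."""
    flags = []
    for i in range(window, len(yields)):
        baseline = sum(yields[i - window:i]) / window
        if yields[i] < baseline * (1 - drop_threshold):
            flags.append(i)
    return flags

# Hypothetical weekly yield estimates for one crop region.
weekly_yield = [100, 98, 102, 101, 99, 70, 72, 95]
print(flag_disruptions(weekly_yield))  # -> [5]: the week yield collapsed
```

This mirrors the "car dashboard" idea: the consumer of the flag does not need the underlying satellite or sensor detail, only a signal that week 5 needs attention.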

Duration:00:26:50


How Gen-AI Will Impact Mass Customisation Today and in the Future

1/20/2026
Mass customisation has long been the holy grail for industrial manufacturers, offering the ability to provide highly tailored products while maintaining efficiency, scalability, and profitability. However, as products become increasingly complex, traditional methods of managing configurations are starting to reveal their limitations. In a recent episode of Tech Transformed, host Christina Stathopoulos, Founder of Dare to Data, spoke with Stella d’Ambrumenil, Product Manager at Configit, about the operational realities and future potential of generative AI technology in manufacturing.

The Challenge of Complexity

Modern manufacturers often operate somewhere between make-to-order and assemble-to-order models. While these approaches allow flexibility, they also expose companies to a major problem: fragmented configuration processes. Sales teams, engineers, and manufacturing units may all handle different aspects of customisation separately, relying on spreadsheets or outdated product documentation. The result is inefficiency, errors, and an inability to scale effectively. “The problem isn’t just that you have lots of options,” Stella explains. “It’s that the knowledge about those options is scattered. If configuration is handled differently across departments, you inevitably get mistakes and lost time.”

Configit Ace® Prompt: Bridging the Gap

Enter Configit Ace® Prompt, the latest tool designed to tackle this very problem. At its core, Configit Ace® Prompt converts unstructured data into structured configuration logic that can be used across all departments. Formalising configuration knowledge ensures that customisation is accurate, repeatable, and manageable. This approach not only reduces errors but also democratises access to critical product information. Engineers, product managers, and sales teams no longer need to interpret fragmented data manually — they can work from a single source of truth. Early adopters report significant time savings, fewer mistakes, and smoother collaboration.

Why Configuration Lifecycle Management Matters

Configit Ace® Prompt is a key enabler of Configuration Lifecycle Management (CLM), an approach to maintaining consistent data and processes across the entire product lifecycle, from design and engineering to manufacturing and service. This is crucial for companies seeking to scale customisation without creating chaos in operations. By adding generative AI technology, manufacturers can implement a CLM approach faster, automating logic creation, catching configuration errors early, and ensuring that complex products are delivered efficiently.

Looking Ahead: CLM Summit 2026

For professionals interested in deepening their understanding of configuration management, Configit’s CLM Summit 2026, an online event scheduled for May 6 and 7, will provide insights into best practices, advanced strategies, and tools like Configit Ace® Prompt. It’s an opportunity to see how companies can leverage configuration management to stay competitive in a world of growing product complexity. For more insights, visit: configit.com

Takeaways
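As a rough illustration of what "structured configuration logic" means in general (an invented example, not Configit Ace® Prompt's actual output or API), product options and compatibility rules can be formalised and checked in one place instead of in per-department spreadsheets:

```python
# Minimal sketch of formalised configuration logic. Options and rules
# are invented examples to show the single-source-of-truth idea.
OPTIONS = {
    "engine": {"petrol", "electric"},
    "gearbox": {"manual", "automatic"},
}

# Each rule pairs a triggering option/value with a forbidden option/value.
RULES = [
    (("engine", "electric"), ("gearbox", "manual")),  # hypothetical: EVs ship automatic only
]

def validate(config):
    """Return a list of human-readable errors; empty means the config is valid."""
    errors = []
    for option, value in config.items():
        if value not in OPTIONS.get(option, set()):
            errors.append(f"unknown value {value!r} for {option!r}")
    for (opt_a, val_a), (opt_b, val_b) in RULES:
        if config.get(opt_a) == val_a and config.get(opt_b) == val_b:
            errors.append(f"{opt_a}={val_a} is incompatible with {opt_b}={val_b}")
    return errors

print(validate({"engine": "electric", "gearbox": "manual"}))  # reports the clash
print(validate({"engine": "petrol", "gearbox": "manual"}))    # -> []
```

Because sales, engineering, and manufacturing all call the same `validate`, an invalid combination is caught identically everywhere, which is the repeatability the episode describes.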

Duration:00:29:06


AI-Ready Employees: How Skills-First Training Drives Business Impact

1/14/2026
As organisations navigate the rapid rise of AI, the challenge is no longer simply acquiring technology; it’s preparing people to use it effectively. Many companies are realising that access to AI tools alone doesn’t translate into business impact. Employees need meaningful opportunities to develop skills that can be applied immediately, helping teams work smarter and make better decisions. In this episode of Tech Transformed, Christina Stathopoulos, Founder of Dare to Data, speaks with Gary Eimerman, Chief Learning Officer at Multiverse, about the pressing challenge of closing the AI and data skills gap in the workforce. They explore how organisations can build an AI-ready workforce, focusing on non-technical employees and the importance of a skills-first approach to learning.

The Skills-First Approach

Multiverse champions a skills-first approach to upskilling employees in AI and data, asserting that this targeted training drives measurable business impact, including increased productivity, revenue growth, and time savings. This strategy moves beyond general AI literacy to focus on practical, applied learning. By diagnosing both organisational needs and individual skill levels, the approach identifies gaps and prescribes tailored, project-based learning experiences. Employees don’t just complete modules in isolation; they work on real-world projects that apply the skills they are learning from day one, reinforcing retention and ensuring that training contributes to tangible outcomes.

Learning in the AI Era

Gary explains that learning in the AI era is not simply about providing tools or access to content; it’s about driving behaviour change, aligning learning with business outcomes, and embedding a culture of continuous skill development. As AI reshapes both the work we do and the way we learn, organisations that invest in people-first strategies position themselves to thrive rather than merely adapt. This conversation demonstrates that the future of work is always-on learning, and that meaningful investment in AI and data skills is no longer optional; it’s a critical driver of business success.

Unlocking Workforce Potential

By combining practical, applied training with ongoing support and measurable outcomes, companies can not only close the AI skills gap but also unlock the full potential of their workforce in an era defined by rapid technological change.

Takeaways

Chapters
00:00 Closing the AI and Data Skills Gap
02:02 Challenges in Building an AI-Ready Workforce
06:06 The Skills First Approach to Learning
10:04 Supporting Non-Technical Employees in AI
13:46 Measuring the Impact of AI Skills...

Duration:00:26:14


Automotive Communication Best Practices: Trust, Privacy, and Compliance

1/14/2026
In the automotive industry, trust and transparency are no longer optional; they are essential. Dealerships that communicate clearly and responsibly with their customers strengthen relationships and improve overall experiences. In this episode of Tech Transformed, host Trisha Pillay speaks with Sean Barrett, Chief Information Officer at CallRevu, about how dealerships can navigate the changing landscape of communication while maintaining accountability, compliance, and operational resilience.

The Evolution of Dealership Communication

Communication has always been at the heart of dealership operations. The phone system was once the primary lifeline between customers and dealerships, giving managers the visibility needed to ensure interactions were handled correctly. Today, communication extends far beyond the phone. SMS, MMS, instant messaging, and other channels allow customers to engage in multiple ways. Sean explains how integrating these channels into a single technology platform provides managers with a clear view of all interactions, ensuring employees follow policies and customers receive the attention they deserve. This approach strengthens trust and improves the overall customer experience.

Compliance and Data Privacy in Automotive Communication

Alongside multi-channel communication, compliance and data privacy are critical. Regulations like GDPR and UN R155 require dealerships to protect customer data while maintaining seamless communication. Transparent practices, combined with adherence to regional rules, help build trust and protect both customers and the dealership’s reputation. Observing patterns in customer interactions also allows dealerships to make informed decisions, improve processes, and enhance service quality. Using these data insights, dealerships can make communication more effective and meaningful for every customer.

Infrastructure That Keeps Dealerships Operational

Reliable infrastructure underpins all communication efforts. Sean shares how dealerships can prepare for unexpected disruptions with geo-redundant systems, cloud-based platforms, and layered internet backups, including options like Starlink or fibre connections. These measures ensure dealerships stay operational, customers can reach them without interruption, and business continuity is maintained.

Preparing for Emerging Communication Channels

As new channels emerge, proactive preparation is key. Dealerships that view communication as an investment, rather than a cost, position themselves for long-term success. Monitoring trends, adapting quickly, and fostering transparency help maintain strong customer relationships even as expectations evolve.

Training and Staff Development

Staff development is a critical component of a communication strategy. By using insights from technology platforms, dealerships can guide employee training, build accountability, and create a culture of learning. Confident, well-trained teams contribute to consistent, high-quality interactions that enhance customer trust. Success in automotive communication isn’t just about adopting the latest tools—it’s about building systems and practices that protect customers, support employees, and foster trust at every touchpoint. Sean Barrett’s insights provide a roadmap for dealerships aiming to elevate communication strategies, improve customer satisfaction, and

Duration:00:20:45


From Monolithic to Composable: A New Era in CDPs

1/5/2026
In a world where customer expectations evolve faster than ever, organisations are rethinking how they manage and leverage data. Legacy, monolithic Customer Data Platforms (CDPs) are increasingly challenged by rigidity, slow adaptability, and regulatory pressures. In this episode of Tech Transformed, Christina Stathopoulos, Founder of Dare to Data, speaks with Joe Pulickal, Director of Product Management at Uniphore, about the shift to composable CDPs and what it means for modern marketing technology.

Moving Away from Monolithic CDPs

Organisations are moving away from rigid, all-in-one CDPs as regulations around data privacy, consent, and cross-border data flows intensify. Joe explains that companies can no longer rely on systems that lock them into a single architecture or make compliance retrofitting difficult. Data governance, consent management, and data sovereignty have become critical considerations in every technology decision, forcing leaders to rethink the underlying structure of their CDPs.

Challenges in Composable Systems

While composable CDPs offer flexibility, they introduce new challenges. Organisations must define ownership and accountability within modular systems to prevent fragmentation and ensure consistent data quality. Leadership must consider how compute, storage, and access are distributed across modules while maintaining compliance and security standards. Joe notes that without clarity on ownership, organisations risk operational inefficiency and weakened governance.

Flexibility and Modularity in Data Management

The core advantage of composable architectures lies in modularity. By decoupling components, from data ingestion to activation, organisations gain the freedom to innovate without being constrained by a monolithic platform. Joe emphasises: “You need flexibility in where data lives, how compute happens, ultimately doubling down on sovereignty, security, and that composable idea that initially started with data.” This approach allows teams to adopt new tools, scale selectively, and respond to changing business or regulatory requirements with agility.

Embracing First-Party Data Strategies

The shift to first-party data strategies is essential in today’s marketing landscape. With third-party cookies being phased out and privacy regulations tightening, companies must rely on direct, trusted data from their customers. Composable CDPs provide the framework to centralise first-party data while giving teams the ability to personalise experiences, maintain compliance, and safeguard trust. Joe highlights that organisations need to view data not just as an asset, but as a responsibility, balancing customer value with ethical management. Here is what leaders can do:

This episode offers practical insights for leaders navigating the transition from traditional CDPs to composable architectures. It highlights how thoughtful design, governance, and first-party data strategies empower organisations to act with agility, comply with regulations, and...
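The decoupling idea can be sketched generically (a conceptual toy, not any vendor's CDP architecture; the module names and data are invented): ingestion, governance, and activation live behind small interfaces, so each can be swapped independently.

```python
# Toy "composable" pipeline: each stage is an independent, swappable module.
from typing import Callable, Iterable, List, Dict

Record = Dict[str, str]

def csv_ingest(rows: Iterable[str]) -> List[Record]:
    """Ingestion module: parse 'id,email' rows into records."""
    return [dict(zip(["id", "email"], row.split(","))) for row in rows]

def consent_filter(records: List[Record], consented: set) -> List[Record]:
    """Governance module: only profiles with recorded consent pass through."""
    return [r for r in records if r["id"] in consented]

def email_activation(records: List[Record]) -> List[str]:
    """Activation module: turn records into campaign actions."""
    return [f"send_campaign:{r['email']}" for r in records]

def run_pipeline(source, ingest: Callable, transform: Callable, activate: Callable):
    # The orchestrator only knows the interfaces, not the implementations.
    return activate(transform(ingest(source)))

actions = run_pipeline(
    ["1,ana@example.com", "2,ben@example.com"],
    ingest=csv_ingest,
    transform=lambda recs: consent_filter(recs, consented={"1"}),
    activate=email_activation,
)
print(actions)  # only the consented profile is activated
```

Swapping `csv_ingest` for a warehouse reader, or `email_activation` for another channel, changes one module without touching the rest, which is the agility the episode attributes to composable designs.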

Duration:00:28:52


What Should Contact Centres Do First to Prepare for Agentic AI?

12/9/2025
As companies rethink how they provide customer experiences (CX), a new form of AI capability, agentic AI, is quickly changing how work is accomplished in contact centres. In a recent episode of the Tech Transformed podcast, Dialpad Lead Product Manager Calvin Hohener sits down with host Jon Arnold, Principal at J Arnold & Associates. They discuss the transition from legacy chatbots to more autonomous agents capable of completing tasks and improving customer interactions. The conversation highlights the importance of understanding the technology's impact on enterprise architecture, the need for clean data, and the strategic implications for C-level executives. Hohener emphasises the importance of starting with clear use cases and working closely with vendors to maximise the potential of AI in business operations.

From Legacy Chatbots to Agentic AI

Most people have used chatbots and found them lacking. Hohener explains why: earlier conversational AI was based on retrieval-augmented generation (RAG). These systems could take user input, search a knowledge base or the internet, and provide an answer. This was helpful for customer service queries, but limited. “Previous AI models could retrieve and return information, but now we’re moving into a new phase with agentic AI.” Agentic AI can take action rather than just providing information. For AI agents to succeed, organisations must first organise their data. “How your internal knowledge is structured is crucial. Even if the data is unorganised, you need to know its location and ensure it’s clean,” stated Hohener. Agentic systems depend on internal knowledge, including knowledge base articles, CRM notes, and process documentation. If this foundation is disordered, the agent’s output will not be reliable. This isn’t about achieving ideal data cleanliness from the start; it’s about knowing what information exists, where it is, and whether it can be trusted. If an AI agent bases its decisions on outdated, conflicting, or incomplete content, it will struggle to perform tasks reliably, regardless of how sophisticated the model is. Enterprises need at least basic clarity about which systems hold which knowledge, who is responsible for them, and whether there is consistency across sources. Hohener noted that organisations often overlook how quickly conflicting information can undermine an otherwise well-designed agent. A single outdated procedure or mismatched policy in a knowledge repository can lead an AI to produce incorrect results or halt during workflow execution. Keeping internal content clean, deduplicated, and consistent gives the agent a reliable, valid source. This reliability becomes crucial when AI starts taking meaningful actions, not just providing answers. By focusing on data readiness early, enterprises not only reduce deployment obstacles but also set the stage for scaling agentic AI across more complex processes. In many ways, preparing data isn’t just a technical task; it’s an organisational one.

How Will Human Agents Work with AI Agents?

The Dialpad Lead Product Manager noted that human roles, too, will evolve as agentic AI enters the contact centre. For instance, human agents will take on more of an advisory role, reviewing conversation traces and helping adjust the models. Instead of...
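The retrieval-versus-action distinction Hohener describes can be sketched in a few lines (a hypothetical toy, not Dialpad's implementation; the knowledge base, policy text, and `issue_refund` action are all invented):

```python
# Toy contrast between a RAG-style bot (returns information) and an
# agentic bot (consults the same knowledge, then performs an action).
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are allowed within 30 days of purchase.",
}

def retrieval_bot(query):
    """RAG-style: find relevant text and return it -- information only."""
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in query.lower():
            return answer
    return "Sorry, I couldn't find anything."

ACTIONS = {}  # stand-in for a real system of record

def issue_refund(order_id):
    ACTIONS[order_id] = "refund_issued"
    return f"Refund issued for order {order_id}"

def agentic_bot(query, order_id):
    """Agentic: ground the decision in retrieved knowledge, then act."""
    policy = retrieval_bot("refund policy")
    if "30 days" in policy:
        return issue_refund(order_id)
    return "Refund not permitted under current policy."

print(retrieval_bot("what is your refund policy?"))
print(agentic_bot("please refund my order", "A123"))
```

Note how the agent's action depends entirely on what the knowledge base says: if the policy text were outdated or contradicted elsewhere, the wrong action would follow, which is exactly the data-readiness risk discussed above.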

Duration:00:24:50


Breaking Free from Busywork: AI and the Future of Profitable Client Delivery

12/8/2025
Client service teams are at a breaking point. Margins are shrinking, demand keeps rising, and much of the day is consumed by work that doesn’t move the needle. As a result, skilled people often spend hours reconciling spreadsheets, re-entering the same data across multiple systems, and chasing updates: time that should be spent on the work clients actually pay for. Every hour lost to manual admin is an hour of revenue slipping away, a hit no business can afford. AI isn’t just a buzzword here; it’s a practical lever. It can cut through the repetitive tasks that slow teams down, surface the information they need instantly, and free them to focus on high-value work. The companies winning aren’t replacing staff; they’re removing the obstacles that keep people from doing their best. In a world where speed and accuracy matter more than ever, that shift can’t be ignored. In the latest episode of Tech Transformed, hosted by Christina Stathopoulos, founder of Dare to Data, Daniel Mackey, CEO of Teamwork.com, discussed how AI is reshaping the daily operations of client service teams. From automating repetitive admin tasks to surfacing critical information faster, AI is giving teams the bandwidth to focus on the work that truly drives value for clients.

AI and Business Transformation in Practice

During the conversation, Mackey highlighted how AI is reshaping business operations, emphasising efficiency and productivity rather than job displacement. “AI has transformed our company,” he noted, pointing to tangible improvements across workflow and project management. Teams are now able to focus on strategic initiatives, leaving repetitive tasks to intelligent systems. The Teamwork.com CEO also shared a recent example from a government agency that integrated AI into its processes. By automating routine administrative work, the agency experienced better resource allocation and improved project outcomes. “They’re more efficient, higher quality,” Mackey said. “AI allows them to focus on the bigger parts of the business.”

Rethinking Productivity and Client Delivery

One of the challenges in the industry is that most AI features are added onto existing tools that weren’t designed for client services. Mackey discussed how TeamworkAI addresses this gap. Built into a platform designed specifically for managing client services end-to-end, TeamworkAI connects projects, people, and profits in one system. By integrating AI directly into client delivery workflows, organisations can streamline project management, reduce manual reporting, and ensure that technology enhances rather than disrupts service delivery. This approach allows businesses to use technology strategically, rather than simply automating isolated tasks.

Technology and the Future of Work

The discussion also touched on the broader impact of AI on traditional business models. Organisations that adopt AI thoughtfully can improve their internal processes, freeing employees from repetitive tasks and enabling them to contribute to higher-value projects. Mackey emphasised that the goal isn’t just automation; it’s profitable client delivery. AI can unlock both time and insight, allowing businesses to prioritise the most impactful work. AI is redefining how businesses allocate resources, manage projects, and deliver value to clients. By eliminating repetitive work and connecting projects,...

Duration:00:24:38


How Is Generative AI Transforming Customer Experience Today?

12/4/2025
As generative AI evolves rapidly, customer experience (CX) is evolving with it. In a recent episode of the Tech Transformed podcast, Mike Gozzo, Chief Product and Technology Officer at Ada, sat down with host Christina Stathopoulos, Founder of Dare to Data. They talked about how generative AI is changing business-to-customer interactions. “I view it not just as a business opportunity, but we are here to solve a problem that has existed as long as commerce has,” Gozzo said. He emphasised that AI's goal isn’t just efficiency; it is about building trust and clearly understanding customer needs to allow productive interactions. Artificial intelligence, he noted, “has really enabled what used to be much more costly to happen at scale.” The Ada Chief Product and Technology Officer pointed out that the best customer experiences are highly personalised, comparing it to arriving at a luxury hotel where the staff already know your name, even on your first visit. He noted that modern AI aims to make such experiences, once reserved for a select few, common for everyone. Looking to the future, Gozzo tells Stathopoulos he believes generative AI will foster more engagement between customers and brands. “If I consider the trend, I think we will have much more natural, personalised, and effortless interactions than ever before because of this technology.”

Gen AI’s Impact on Customer Data

When discussing operational challenges, especially regarding customer data management, Gozzo stressed quality over quantity. He explained that in most AI set-ups, “the real value lies not in the data you’ve collected, but in the understanding of how your business runs, operates, and the people doing the tasks you want to automate.”

Governance, Human Orchestration, and the Future of AI

Beyond personalisation, AI should be implemented responsibly and monitored closely. “The first thing with any AI deployment is to avoid thinking of it as software you buy, deploy, and forget. They need ongoing monitoring, engagement, and maintenance,” Gozzo tells Stathopoulos. He suggested thorough testing processes and collaboration with specialised companies like AIUC, which verify AI systems against common risks. “These tests need to happen quarterly or yearly because the underlying models change so rapidly,” he added. In addition to regular AI checks, the human element is also critical. AI might automate up to 80% of routine tasks, but humans will still play a vital role. Gozzo described the human role as that of an orchestrator, managing teams that include both humans and AI systems and effectively delegating tasks between them. Finally, Gozzo talked about AI's immediate impact on customer experience. “Our leading customers’ AI agents are outperforming humans. They deliver higher-quality customer service experiences, and customers prefer interacting with their AI.” The key measure, he said, is the positive effect on business growth and customer lifetime value. Gozzo’s parting advice to IT decision makers: “The people on your team know how to make AI work. Capture their insights. Don’t treat this as a technology project. The technologist will not dominate the next decade. This is about business leaders and experts doing the heavy lifting.” At the core of generative and agentic AI, Gozzo...

Duration:00:22:24


The 3G Sunset Worldwide: How Enterprises Can Avoid Device Disruption

12/3/2025
The era of 3G is ending. For many industrial businesses, smart infrastructure systems, remote device management, and IoT connectivity rely on networks that are now being phased out globally. The question isn’t if, but when, your operations could be disrupted. In this episode of Tech Transformed, Trisha Pillay speaks with Jana Vidis, Business Development Manager at IFB, about the worldwide 3G sunset, what it means for enterprises, and how proactive planning can prevent costly disruptions. They explore the reasons behind the transition to 4G and 5G, the impact on various industries, and the strategies organisations can implement to assess their reliance on legacy devices.

Why the 3G Sunset Matters

3G networks have powered connectivity for decades, offering wide coverage and reliability. But as global operators move to 4G and 5G, maintaining 3G is no longer sustainable. Carriers are discontinuing services, and support is dwindling, leaving legacy devices vulnerable to:

Jana emphasises: “Have a good understanding of what devices you have. Work with IT partners to prepare for future changes. Plan your transition and act before disruption hits.” She stressed the importance of understanding current technology deployments, planning for transitions, and future-proofing investments to avoid disruptions. The conversation highlights the need for proactive measures in adapting to technological advancements and ensuring operational continuity.

A Global Timeline

The transition is already well underway across multiple regions:

North America:
Europe:
Asia:
Africa:
South America:
Middle East:

Industrial devices still using 3G must transition now to avoid operational disruption. From smart infrastructure to remote IoT systems, legacy devices left unaddressed can cause downtime, inconsistent performance, and increased security risks.

Takeaways

Duration:00:18:23


Why Do Most ‘Full-Stack Observability’ Tools Miss the Network?

11/25/2025
Tech leaders are often led to believe that they have “full-stack observability.” The MELT framework of metrics, events, logs, and traces became the industry standard for visibility. However, Robert Cowart, CEO and Co-Founder of ElastiFlow, believes that the MELT framework leaves a critical gap. In the latest episode of the Tech Transformed podcast, host Dana Gardner, President and Principal Analyst at Interarbor Solutions, sits down with Cowart to discuss network observability and its vital role in achieving full-stack observability. The speakers discuss the limitations of legacy observability tools that focus on MELT and how this leaves a significant and dangerous blind spot. Cowart emphasises the need for teams to integrate network data enriched with application context to enhance troubleshooting and security measures. What’s Beyond MELT? Cowart explains that the MELT framework, “metrics, events, logs, and traces,” describes what is being monitored or observed with that information, which has traditionally meant servers and applications. “Organisations need to understand their compute infrastructure and the applications they are running on. All of those servers are connected to networks, and those applications communicate over the networks, and users consume those services again over the network,” he added. “What we see among our growing customer base is that there's a real gap in the full-stack story that has been told in the market for the last 10 years, and that is the network.” The lack of insight results in a constant blind spot that delays problem-solving, hides user-experience issues, and leaves organisations vulnerable to security threats. Cowart notes that while performance monitoring tools can identify when an application call to a database is slow, they often don’t explain why. “Was the database slow, or was the network path between them rerouted and causing delays?” he questions. 
“If you don’t see the network, you can’t find the root cause.” The outcome is longer troubleshooting cycles, isolated operations teams, and an expensive “blame game” among DevOps, NetOps, and SecOps. ElastiFlow approaches this differently, extending observability to network connectivity: understanding who is communicating with whom and how that communication behaves. This data not only speeds up performance insights but also acts as a “motion detector” within the organisation. Monitoring east-west traffic, north-south traffic, and cloud VPC flow logs helps organisations spot unusual patterns that indicate internal threats or compromised systems used for launching external attacks. “Security teams are often good at defending the perimeter,” Cowart says. “But once something gets inside, visibility fades. Connectivity data fills that gap.” Isolated Monitoring to Unified Experience Cowart believes that observability can’t just be about green lights...
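The “motion detector” idea can be sketched as a toy flow-log analysis. This is not ElastiFlow’s implementation; the records, field names, and fixed threshold below are invented for illustration, and real systems derive statistical baselines from enriched NetFlow/IPFIX or VPC flow-log data:

```python
from collections import defaultdict

# Hypothetical flow records; real pipelines ingest NetFlow/IPFIX or
# cloud VPC flow logs with many more fields.
flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9", "bytes": 1200},
    {"src": "10.0.1.5", "dst": "10.0.2.9", "bytes": 900},
    {"src": "10.0.1.7", "dst": "203.0.113.4", "bytes": 50_000_000},  # unusual egress
]

def detect_unusual_pairs(flows, threshold_bytes=10_000_000):
    """Flag talker pairs whose total volume exceeds a simple baseline.
    Production systems use learned statistical baselines, not a fixed number."""
    totals = defaultdict(int)
    for f in flows:
        totals[(f["src"], f["dst"])] += f["bytes"]
    return [pair for pair, total in totals.items() if total > threshold_bytes]

print(detect_unusual_pairs(flows))  # the high-volume egress pair stands out
```

Even this crude who-talks-to-whom view illustrates why connectivity data catches lateral movement and data exfiltration that server-centric MELT signals miss.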

Duration:00:24:06


How HashiCorp and Red Hat are preparing enterprises for AI at scale

11/25/2025
Enterprises are discovering that the first wave of cloud adoption didn’t simplify operations. It created flexibility, but it also introduced fragmentation, rising costs, and skills gaps that now make AI adoption harder to manage. In this episode of Tech Transformed, analyst and host Dana Gardner speaks with two leaders from across the IBM portfolio: Maria Bracho, CTO for the Americas at Red Hat, and Tyler Lynch, Field CTO for the HashiCorp product suite. They discuss how organisations can move from scattered cloud operations to a unified, automated model that supports AI securely and at scale. The conversation covers the pressures leaders face today, the role of automation, and the skills and operating model changes required as AI becomes core to enterprise strategy. Key insights from the discussion Cloud complexity is accelerating Most organisations now run “a sprawl of tool sets and environments,” Bracho notes, often without the people or standardised processes to manage them. While cloud created opportunities, the operational overhead has increased. AI raises the stakes Training, tuning, and inference often run in different environments, each with separate performance and security requirements. Bracho describes AI as “the killer workload,” reinforcing the need for robust hybrid architectures. Skills gaps slow progress Lynch highlights the disconnect between AI teams and production engineering teams. Without alignment, model deployment becomes slow and risky, echoing findings from the HashiCorp 2025 Cloud Complexity Report, where most organisations say platform and security teams are not working in sync. AI exposes underlying weaknesses “AI is not going to solve complexity; it will amplify what you already have,” Bracho says. But with structured processes and automation, AI can reduce operator workload and help teams adopt best practices faster. 
Automation is becoming essential The Cloud Complexity Report shows that more than half of enterprises see automation as key to unlocking cloud innovation. With the foundations already laid, AI can accelerate progress by improving consistency and reducing manual effort. Modernisation is continuous Both guests emphasise that AI success depends on long-term investment in people, operating rhythms, and security. Consulting can help organisations start strong, but lasting results come from internal alignment and disciplined execution. Episode chapters 00:00 Navigating cloud complexity 08:11 Skills and operating model challenges 15:13 Automation for cloud and AI productivity 21:48 How consulting accelerates AI readiness 24:10 Final guidance for CIOs About...

Duration:00:25:05


AI-Powered Chip Design: Real World Impact Across Silicon to Systems

11/18/2025
The semiconductor industry is at an inflection point. As systems become more intelligent, connected, and software-defined, chip design is growing too complex for humans alone. Advances in electronic design automation are reshaping how silicon is built and verified, enabling faster, smarter, and more reliable innovation from data centers to edge devices. How AI Is Changing EDA and Chip Design In the latest episode of Tech Transformed, host John Santaferraro speaks with Dr. Thomas Andersen, Vice President of AI and Silicon Innovation at Synopsys, about the real-world impact of AI in chip design. Together, they explore how AI and automation are redefining EDA, how generative AI is accelerating design efficiency, and what the Synopsys acquisition of Ansys means for the future of simulation and system-level integration. As Dr. Andersen explains, “AI is transforming EDA. Synopsys leads in silicon design, and the Ansys acquisition expands our capabilities across multiphysics simulation and system optimization.” From Silicon to Systems The integration of complex hardware and software has become one of the greatest challenges in semiconductor and OEM innovation. Traditional sequential development, where software waits for hardware, often causes delays and missed targets. Advances in EDA tools and virtual prototyping now enable engineers to initiate software design months before silicon is finalised, thereby accelerating bring-up and enhancing collaboration across the supply chain. “Generative AI enables more efficient design,” says Andersen. “AI reshapes engineering workflows, but human expertise remains essential.” The result is faster time-to-market, enhanced design verification, and greater overall system reliability. Listen to the full conversation on the Tech Transformed podcast to discover how Synopsys is advancing electronic design automation, improving engineering workflows and chip design from silicon to systems. 
For more insights, follow Synopsys: @Synopsys | @synopsyslife | https://www.facebook.com/Synopsys/ | https://www.linkedin.com/company/synopsys/

Duration:00:22:53


Driving Enterprise Innovation with AI and Strong CI/CD Foundations

11/13/2025
As enterprises push to deliver software faster and more efficiently, continuous integration and continuous delivery (CI/CD) pipelines have become central to modern engineering. With increasing complexity in builds, tools, and environments, the challenge is no longer just speed but also maintaining flow, consistency, and confidence in every release. In this episode of Tech Transformed, host Dana Gardner speaks with Arpad Kun, VP of Engineering and Infrastructure at Bitrise, to explore how solid CI/CD foundations can drive innovation and enable enterprises to harness AI in more practical, impactful ways. Drawing on findings from the Bitrise Mobile DevOps Insights Report, Kun shares how teams are optimising mobile delivery pipelines to accelerate development and support intelligent automation at scale. Complexity of Continuous Integration “Continuous integration pipelines are becoming more complex,” says Kun. “Build times are decreasing despite increasing complexity.” Faster compute and caching solutions are helping offset these pressures, but only when integrated into a cohesive CI/CD platform that can handle the rising demands of modern software delivery. A mature CI/CD environment creates stability and predictability. When developers trust their pipelines, they iterate faster and with less friction. As Kun notes, “A robust CI/CD platform reduces anxiety around releases.” Frequent, smaller iterations deliver faster feedback, shorten release cycles, and often improve app ratings, especially in the fast-paced world of mobile and cross-platform development. AI Ambitions with Engineering Reality It’s easy to become swept up in the potential of AI without considering whether existing foundations can support it. Many development environments are not yet equipped to handle the iterative, data-intensive nature of AI-powered software engineering. 
Without scalable CI/CD pipelines, teams risk encountering bottlenecks that can cancel out the potential benefits of AI. To truly drive innovation, enterprises must align their AI ambitions with robust automation, strong observability, and disciplined engineering practices. A well-designed CI/CD platform allows teams to integrate AI responsibly, accelerating testing, improving deployment accuracy, and maintaining agility even as complexity grows. For more insights, follow Bitrise: X: @bitrise | Instagram: @bitrise.io

Duration:00:25:02


From Cost-Cutting to Competitive Edge: The Strategic Role of Observability in AI-Driven Business

11/12/2025
For years, observability sat quietly in the background of enterprise technology: an operational tool for engineers, something to keep the lights on and costs down. As systems became more intelligent and automated, observability has stepped into a far more strategic role. It now acts as the connective tissue between business intent and technical execution, helping organisations understand not only what is happening inside their systems, but why it’s happening and what it means. This shift forms the core of a recent Tech Transformed podcast conversation between host Dana Gardner and Pejman Tabassomi, Field CTO for EMEA at Datadog. Together, they explore how observability has evolved into what Tabassomi calls the “nervous system of AI”, a framework that allows enterprises to translate complexity into clarity and automation into measurable outcomes. Building AI Literacy AI models make decisions that can affect everything from customer experiences to financial forecasting. Without observability, those decisions remain obscure. “Visibility into how models behave is crucial,” Tabassomi notes. True observability allows teams to see beyond outputs and into the reasoning of their systems: whether a model is drifting, whether automation is adapting effectively, and whether results align with strategic goals. This transparency builds trust. It also ensures accountability, giving organisations the confidence to scale AI responsibly without losing sight of the outcomes that matter most. Observability Observability is not merely about monitoring; it is about decision-making. It provides the insight required to manage complex systems, optimise outcomes, and act with agility. For organisations relying on AI and automation, observability becomes the differentiator between being merely efficient and achieving a sustainable competitive edge. In short, observability is no longer optional; it is central to translating technology into strategy and strategy into advantage. 
For more insights, follow Datadog: @datadoghq | facebook.com/datadoghq | linkedin.com/company/datadog

Duration:00:26:48


Can AI Tools Actually Prevent Burnout — or Are They Making It Worse?

11/6/2025
“Without healthy employees, you don’t have healthy customers. And without healthy customers, you don’t have a healthy bottom line.” — Kate Visconti, Founder and CEO, Five to Flow. While artificial intelligence (AI) has accelerated development and made enterprises more efficient, it has also brought more deadlines, which often spill into after-hours messaging. Burnout has become a default by-product of the push for productivity, especially in the tech industry. In this episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host, Producer and B2B Tech Journalist, speaks with Kate Visconti, Founder and CEO of Five to Flow, about the critical issues of burnout and disengagement in the workplace. They discuss the five core elements of change management, the financial implications of employee wellness, and strategies for enhancing productivity through flow optimisation. Also Watch: Fixing the Gender Gap in STEM The Wellness Wave Diagnostic to Help Fix Profit Leaks Visconti stresses the importance of creating a supportive work environment and implementing effective change management practices to improve organisational performance. The conversation also highlights the role of technology in productivity and the need for leaders to prioritise employee well-being to drive business success. With an ambition to change the way organisations define true performance, Visconti developed a data-driven framework called The Wellness Wave. As per the official Five to Flow website, The Wellness Wave is “a proprietary diagnostic that measures sentiment and business performance across five core elements.” Visconti sheds light on the original framework of the company. She says, “The original was adopted when we first kicked off as part of our consulting, and it's called the Wellness Wave diagnostic. 
It’s literally looking across the five core elements — people, culture, process, technology, and analytics.” This framework helps companies identify and fix their profit leaks, which are the hidden financial losses caused by employee burnout, disengagement, and distraction. In her conversation with Dua, host of the Tech Transformed podcast episode, Visconti shares how understanding human behaviour can lead to significant improvements in business performance. According to Five to Flow’s global diagnostics, only 13 per cent of flow triggers work at their best. For tech leaders, that means most teams are functioning well below their potential. Kate’s top tip is to create flow blocks. “It’s about designing uninterrupted time for peak focus. This is when your brain isn’t in a stress state. For me, it’s mornings with my coffee. For others, it might be in the afternoon. Communicate those times to your team and protect them like meetings.” These flow blocks aren’t just productivity tricks; they show that focus is more important than frantic multitasking. “Multitasking is a fallacy,” Kate says. “You’re just rapidly switching tasks and burning through mental...

Duration:00:33:10


Beyond the Hyperscalers: Building Cyber Resilience on Independent Infrastructure

11/3/2025
“Cyber resilience isn’t just about protection, it’s about preparation.” Nearly every business today lives in the cloud. Our operations, data, and collaboration tools are powered by servers located invisibly around the world. But here’s the question we often overlook: what happens when the cloud falters? In this episode of Tech Transformed, Trisha Pillay sits down with Jan Ursi, Vice President of Global Channels at Keepit, to uncover the real meaning of cyber resilience in a cloud-first world. Are you putting all your trust in hyperscale cloud providers? Think again. Trisha and Jan explore why relying solely on giants like Microsoft or Amazon can put your data at risk and how independent infrastructure gives organisations control, faster recovery, and true digital sovereignty. Chapters: 00:00 – Introduction to Cyber Resilience and Cloud Strategy 05:00 – The Importance of Independent Infrastructure 10:00 – Shared Responsibility and Misconceptions 15:00 – Digital Sovereignty and Compliance 20:00 – Practical Tips for CISOs and CIOs 22:00 – Conclusion About Jan Ursi: Jan Ursi leads Keepit’s global partnerships, helping organisations embrace the AI-powered cyber resilience era. Keepit is the world’s only independent cloud dedicated to SaaS data protection, security, and recovery. Jan has previously built and scaled businesses at Rubrik, UiPath, Nutanix, Infoblox, and Juniper, shaping the future of enterprise cloud, hyper-automation, and data protection. Follow EM360Tech for more insights: www.em360tech.com | @EM360Tech

Duration:00:23:05


Are Your Keys Safe? Why HSMs Are Now Essential for Cloud, Quantum, and AI Security

10/29/2025
"You have to think about how the online world really operates and how we make sure that data is secure. How can we trust each other in the digital world?" asks Robert Rogenmoser, CEO of Securosys. The answer is "encryption and digital signature." Storing keys insecurely creates immediate risk, which makes strong key security crucial. "If it's just in a software system, you can easily get hacked. If I have your encryption key, I can read your data. If I have your Bitcoin keys, I can spend your money,” says Rogenmoser. In a recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, speaks to Rogenmoser about safeguarding the digital world with cryptographic keys. Rogenmoser makes the case for Hardware Security Modules (HSMs) as the best solution to this critical challenge. Beyond how HSMs protect encryption keys, the pair also discuss the evolution of HSMs, their applications in financial services, the implications of post-quantum cryptography, and the integration of AI in security practices. Are Hardware Security Modules (HSMs) the Ultimate Solution? The conversation stresses the importance of key management and the need for organisations to adapt to emerging technologies while ensuring data security. To mitigate cybersecurity risks, the priority is to store keys securely, control access, and generate strong keys that cannot be easily guessed by cybercriminals. HSMs are the ultimate solution to the key problem, believes Rogenmoser. As firms shift their data to the cloud, securing keys becomes even more essential. The main challenge arises when both the data and the keys are managed by the same cloud provider, as this setup can compromise the integrity of key control and raise concerns about data sovereignty. 
However, Securosys approaches this challenge differently. Rogenmoser explains that organisations can keep their data encrypted in the cloud. At the same time, they keep the key somewhere else, where only they have control over it. Multi-Authorisation System for High-Stakes Transactions Rogenmoser pointed out the company's patented system for multi-authorisation of Bitcoin keys. This system is essential because blockchain transactions are high-stakes and irreversible. "Crypto custody for bitcoins or any cryptocurrency is a major business for our HSM," he said. Banks that hold large amounts of customer crypto cannot afford a single point of failure. "A blockchain operation is a one-way thing. You sign a transaction, and the money is gone." The multi-authorisation system addresses this issue by requiring a "quorum" of people to approve each transaction. Rogenmoser explained, "You can say this transaction can only be signed and sent to the blockchain if one out of three compliance officers signs this, plus two out of five traders." This approach creates a "more secure system" because "the HSM then checks, do we have a quorum? Did everyone actually sign the same transaction?"...
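The quorum rule Rogenmoser describes can be sketched in a few lines. This is a simplified illustration, not Securosys' patented implementation: the signer names are invented, and the hash comparison stands in for the real cryptographic signature verification an HSM performs. It checks both conditions he names: a quorum of approvers, and that everyone signed the same transaction.

```python
from hashlib import sha256

# Illustrative approver sets; names are invented for the example.
COMPLIANCE = {"olivia", "marco", "chen"}          # need at least 1 of 3
TRADERS = {"ana", "ben", "carl", "dee", "eli"}    # need at least 2 of 5

def quorum_met(signatures, tx_payload):
    """Check a rule like '1-of-3 compliance officers AND 2-of-5 traders'.
    A signature only counts if it covers this exact transaction payload."""
    tx_hash = sha256(tx_payload).hexdigest()
    valid = {who for who, signed in signatures.items() if signed == tx_hash}
    return len(valid & COMPLIANCE) >= 1 and len(valid & TRADERS) >= 2

tx = b"transfer 2 BTC to custody wallet"
h = sha256(tx).hexdigest()
print(quorum_met({"olivia": h, "ana": h, "ben": h}, tx))  # True: 1 officer + 2 traders
print(quorum_met({"ana": h, "ben": h}, tx))               # False: no compliance officer
```

The same-transaction check matters because blockchain operations are irreversible: approving "a" transaction is not enough, the quorum must agree on this one.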

Duration:00:19:19


How are 5G and Edge Computing Powering the Future of Private Networks?

10/27/2025
"5G is becoming a great enabler for industries, enterprises, in-building connectivity and a variety of use cases, because now we can provide both the lowest latency and the highest bandwidth possible,” states Ganesh Shenbagaraman, Radisys Head of Standards, Regulatory Affairs & Ecosystems. In a recent episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host, Producer, and Tech Journalist at EM360Tech, speaks to Shenbagaraman about 5G and edge computing and how they power private networks for industries ranging from manufacturing and national security to space. The Radisys Head of Standards believes in combining 5G with edge computing for transformative enterprise connectivity. If you’re a CEO, CIO, CTO, or CISO facing the challenge of keeping pace with capacity, security, and quality demands, this episode is for you. The speakers provide a guide on how to achieve next-gen private networks and prepare for the 6G future. Real-Time Control The growing need for real-time applications, such as high-quality live video streams and small industrial sensors with instant responses, demands data processing closer to the source than ever before. Describing the technical solution that provides near-zero latency and ensures data security, Shenbagaraman says: "By placing the 5G User Plane Function (UPF) next to local radios, we achieve near-zero latency between wireless and application processing. This keeps sensitive data secure within the enterprise network." Such a strategy has become imperative for handling both high-volume and mission-critical low-latency data at the same time. Radisys addresses key compliance and confidentiality issues by keeping the data within a private network. Essentially, they create a security framework that delivers near-zero latency while keeping data under the enterprise's control. Powering Edge Computing Applications The real-world benefit of this near-zero-latency setup is the power it gives to edge computing applications. 
As the user plane function is the network's final data exit point, positioning the processing application next to it ensures prompt insight and action. "The devices could be sending very domain-specific data,” said Shenbagaraman. “The user plane function immediately transfers it to the application, the edge application, where it can be processed in real time." This reduces errors and improves the efficiency of tasks through the Radisys platform, with the results meeting all essential requirements, including compliance needs. One successful use case spotlighted in the podcast is Radisys' work with Lockheed Martin's defence applications. "We enabled sophisticated use cases for Lockheed Martin by leveraging the underlying flexibility of 5G,” Shenbagaraman explained. The Radisys team customised 5G connectivity for the US defence sector, incorporating temporary, ad-hoc networks in challenging terrains using Integrated Access and Backhaul. It also covered isolated, permanent private networks for locations such as maintenance...

Duration:00:25:02


How Do You Make AI Agents Reliable at Scale?

10/27/2025
Now that companies have begun leaping into AI applications and adopting agentic automation, new architectural challenges are bound to emerge. Every new technology brings new responsibilities, consequences, and challenges. To help face and overcome some of these challenges, Temporal introduced the concept of “durable execution.” This concept has quickly become an integral part of building AI systems that are not just scalable but also reliable, observable, and manageable. In this episode of the Tech Transformed podcast, host Kevin Petrie, VP of Research at BARC, sits down with Samar Abbas, Co-founder and CEO of Temporal Technologies. They talk about durable execution and its critical role in driving AI innovation within enterprises. They discuss Abbas’s extensive background in software resilience, the development of application architectures, and the importance of managing state and reliability in AI workflows. The conversation also touches on the collaboration between developers, data teams, and data scientists, emphasising how durable execution can enhance productivity and governance in AI initiatives. Also Watch: Developer Productivity 5X to 10X: Is Durable Execution the Answer to AI Orchestration Challenges? Chatbots to Autonomous Agents “AI agents are going to get more and more mission critical, more and more longer lived, and more asynchronous," Abbas tells Petrie. “They’ll require more human interaction, and you need a very stable foundation to build these kinds of application architectures.” AI today fuels far more than chatbots. Enterprises are increasingly experimenting with agentic workflows: autonomous AI agents that carry out complex background tasks independently. For example, agents can assign, solve, and submit software issues using GitHub pull requests. Such a setup isn’t just a distant vision; the Temporal co-founder pointed to OpenAI’s Codex as a real-world case. 
With this approach, AI becomes a system that can handle hundreds of tasks at once, potentially achieving "100x orders of magnitude velocity," as Abbas described. However, there are architectural difficulties to stay mindful of. AI agents are non-deterministic by nature and often depend on large language models (LLMs) like OpenAI’s GPT, Anthropic’s Claude, or Google’s Gemini. They reason based on probabilities, and they improvise. They often make decisions that are hard to trace or manage. AI workflows as simple code This is where Temporal comes in. It becomes the execution engine that keeps the system cohesive and aligned. “What we are trying to solve with Temporal and durable execution more generally is that we tackle challenging distributed systems problems," said Abbas. Rather than developers stressing over queues, retries, or building their own reliability layers, Temporal allows them to write their AI workflows as simple code. Temporal takes care of everything else: reliable state management, retrying failed tasks, orchestrating asynchronous services, and ensuring uptime regardless of what fails below the surface. As agent-based architectures become more common, the demand for this kind of system-level orchestration will only increase. Listen to the full conversation on the Tech...
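The core durable-execution idea, checkpointing workflow state so a failure resumes rather than restarts, can be sketched in plain Python. This toy model is not Temporal's API: Temporal persists event histories server-side and replays workflow code deterministically, while the file-based checkpoint, step names, and lambdas below are invented for illustration.

```python
import json
import os

def durable_run(steps, state_file="workflow_state.json"):
    """Sketch of durable execution: record each completed step's result
    so a re-run after a crash skips finished work instead of redoing it.
    `steps` is an ordered list of (name, zero-argument function) pairs."""
    done = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            done = json.load(f)  # resume from the last checkpoint
    for name, fn in steps:
        if name in done:
            continue  # completed in a previous run; do not re-execute
        done[name] = fn()
        with open(state_file, "w") as f:
            json.dump(done, f)  # checkpoint after every step
    return done

# Hypothetical agentic workflow: triage an issue, draft a fix, open a PR.
steps = [
    ("fetch_issue", lambda: "issue-42"),
    ("draft_fix", lambda: "patch.diff"),
    ("open_pr", lambda: "pull-request-opened"),
]
print(durable_run(steps, "demo_state.json"))
```

If the process dies between `draft_fix` and `open_pr`, the next run reloads the checkpoint file and executes only the remaining step, which is the behaviour Abbas describes developers getting "for free" instead of building their own retry and state layers.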

Duration:00:25:41