
Tech Transformed

Technology Podcasts

Expert-driven insights and practical strategies for navigating the future of AI and emerging technologies in business. Led by an ensemble cast of expert interviewers offering in-depth analysis and practical advice to make informed decisions for your enterprise.

Location:

United Kingdom

Twitter:

@EM360Tech

Language:

English

Contact:

+44 207 148 4444


Episodes

Can AI Tools Actually Prevent Burnout — or Are They Making It Worse?

11/6/2025
“Without healthy employees, you don’t have healthy customers. And without healthy customers, you don’t have a healthy bottom line.” — Kate Visconti, Founder and CEO, Five to Flow.

While artificial intelligence (AI) has accelerated development and made enterprises more efficient, it has also brought more deadlines, which often spill into after-hours messaging. Burnout has become a default byproduct of productivity, especially in the tech industry.

In this episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host, Producer and B2B Tech Journalist, speaks with Kate Visconti, Founder and CEO of Five to Flow, about the critical issues of burnout and disengagement in the workplace. They discuss the five core elements of change management, the financial implications of employee wellness, and strategies for enhancing productivity through flow optimisation.

Also Watch: Fixing the Gender Gap in STEM

The Wellness Wave Diagnostic to Help Fix Profit Leaks

Visconti stresses the importance of creating a supportive work environment and implementing effective change management practices to improve organisational performance. The conversation also highlights the role of technology in productivity and the need for leaders to prioritise employee well-being to drive business success.

With an ambition to change the way organisations define true performance, Visconti developed a data-driven framework called The Wellness Wave. As per the official Five to Flow website, The Wellness Wave is “a proprietary diagnostic that measures sentiment and business performance across five core elements.”

Visconti sheds light on the original framework of the company. She says, “The original was adopted when we first kicked off as part of our consulting, and it's called the Wellness Wave diagnostic. It’s literally looking across the five core elements — people, culture, process, technology, and analytics.” This framework helps companies identify and fix their profit leaks: the hidden financial losses caused by employee burnout, disengagement, and distraction.

In her conversation with Dua, Visconti shares how understanding human behaviour can lead to significant improvements in business performance. According to Five to Flow’s global diagnostics, only 13 per cent of flow triggers work at their best. For tech leaders, that means most teams are functioning well below their potential.

Kate’s top tip is to create flow blocks. “It’s about designing uninterrupted time for peak focus. This is when your brain isn’t in a stress state. For me, it’s mornings with my coffee. For others, it might be in the afternoon. Communicate those times to your team and protect them like meetings.”

These flow blocks aren’t just productivity tricks; they show that focus is more important than frantic multitasking. “Multitasking is a fallacy,” Kate says. “You’re just rapidly switching tasks and burning through mental...

Duration:00:33:10

Beyond the Hyperscalers: Building Cyber Resilience on Independent Infrastructure

11/3/2025
“Cyber resilience isn’t just about protection, it’s about preparation.”

Every business in this day and age lives in the cloud. Our operations, data, and collaboration tools are powered by servers located invisibly around the world. But here’s the question we often overlook: what happens when the cloud falters?

In this episode of Tech Transformed, Trisha Pillay sits down with Jan Ursi, Vice President of Global Channels at Keepit, to uncover the real meaning of cyber resilience in a cloud-first world. Are you putting all your trust in hyperscale cloud providers? Think again. Trisha and Jan explore why relying solely on giants like Microsoft or Amazon can put your data at risk, and how independent infrastructure gives organisations control, faster recovery, and true digital sovereignty.

Chapters:
00:00 – Introduction to Cyber Resilience and Cloud Strategy
05:00 – The Importance of Independent Infrastructure
10:00 – Shared Responsibility and Misconceptions
15:00 – Digital Sovereignty and Compliance
20:00 – Practical Tips for CISOs and CIOs
22:00 – Conclusion

About Jan Ursi: Jan Ursi leads Keepit’s global partnerships, helping organisations embrace the AI-powered cyber resilience era. Keepit is the world’s only independent cloud dedicated to SaaS data protection, security, and recovery. Jan previously built and scaled businesses at Rubrik, UiPath, Nutanix, Infoblox, and Juniper, shaping the future of enterprise cloud, hyper-automation, and data protection.

Follow EM360Tech for more insights: www.em360tech.com | @EM360Tech

Duration:00:23:05

Are Your Keys Safe? Why HSMs Are Now Essential for Cloud, Quantum, and AI Security

10/29/2025
"You have to think about how the online world really operates and how we make sure that data is secure. How can we trust each other in the digital world?" Robert Rogenmoser, the CEO of Securosys, asks. The answer is "encryption and digital signature." According to Robert Rogenmoser, the CEO of Securosys, storing keys insecurely creates immediate risk. This makes it crucial to maintain strong key security. "If it's just in a software system, you can easily get hacked. If I have your encryption key, I can read your data. If I have your Bitcoin keys, I can spend your money,” says Rogenmoser. In the recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, speaks to Robert Rogenmoser, the CEO of Securosys, about safeguarding the digital world with cryptographic keys. Rogenmoser puts up a case to rally Hardware Security Modules (HSMs) as the best solution for this critical challenge. In addition to discussing how hardware security modules (HSMs) protect encryption keys, they also talk about the evolution of HSMs, their applications in financial services, the implications of post-quantum cryptography, and the integration of AI in security practices. Are Hardware Security Modules (HSMs) the Ultimate Solution? The conversation stresses the importance of key management and the need for organisations to adapt to emerging technologies while ensuring data security. In order to mitigate the cybersecurity risks, the priority is to securely store the keys, control access, and generate impenetrable keys that cannot be easily guessed by cyber criminals. HSMs are the ultimate solution to the key issue, believes Rogenmoser. Firms tend to shift their data to the cloud, making it even more essential to secure keys. The main challenge arises when both the data and the keys are managed by the same cloud provider, as this setup can compromise the integrity of key control and raise concerns about data sovereignty. However, Securosys approaches this challenge differently. Rogenmoser explains that organisations can keep their data encrypted in the cloud. At the same time, they keep the key somewhere else, where only they have control over it. Multi-Authorisation System for High-Stakes Transactions Rogenmoser pointed out the company's patented system for multi-authorisation of Bitcoin keys. This system is essential because blockchain transactions are high-stakes and irreversible. "Crypto custody for bitcoins or any cryptocurrency is a major business for our HSM," he said. Banks that hold large amounts of customer crypto cannot afford a single point of failure. "A blockchain operation is a one-way thing. You sign a transaction, and the money is gone." The multi-authorisation system addresses this issue by requiring a "quorum" of people to approve each transaction. Rogenmoser explained, "You can say this transaction can only be signed and sent to the blockchain if one out of three compliance officers signs this, plus two out of five traders." This approach creates a "more secure system" because "the HSM then checks, do we have a quorum? Did everyone actually sign the same transaction?"...

Duration:00:19:19

How are 5G and Edge Computing Powering the Future of Private Networks?

10/27/2025
"5G is becoming a great enabler for industries, enterprises, in-building connectivity and a variety of use cases, because now we can provide both the lowest latency and the highest bandwidth possible,” states Ganesh Shenbagaraman, Radisys Head of Standards, Regulatory Affairs & Ecosystems. In the recent episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host, Producer, and Tech Journalist at EM360Tech, speaks to Shenbagaraman about 5G and edge computing and how they power private networks for various industries, from manufacturing, national security to space. The Radisys’ Head of Standards believes in the idea of combining 5G with edge computing for transformative enterprise connectivity. If you’re a CEO, CIO, CTO, or CISO facing challenges of keeping up the pace with capacity, security and quality, this episode is for you. The speakers provide a guide on how to achieve next-gen private networks and prepare for the 6G future. Real-Time Control The growing need for real-time applications, such as high-quality live video streams and small industrial sensors with instant responses, demands data processing to occur closer to the source than ever before. Alluding to the technical solution that provides near-zero latency and ensures data security, Shenbagaraman says: "By placing the 5G User Plane Function (UPF) next to local radios, we achieve near-zero latency between wireless and application processing. This keeps sensitive data secure within the enterprise network." Such a strategy has now become imperative in handling both high-volume and mission-critical low-latency data all at the same time. Radisys addresses key compliance and confidentiality issues by storing the data within a private network. Essentially, they create a safe security framework that yields near-zero latency to guarantee utmost data security. Powering Edge Computing Applications The real-world benefit of this zero-latency setup is the power it gives to edge computing applications. As the user plane function is the network's final data exit point, positioning the processing application near it assures prompt perspicuity and action. "The devices could be sending very domain-specific data,” said Shenbagaraman. “The user plane function immediately transfers it to the application, the edge application, where it can be processed in real time." It reduces errors and improves the efficiency of tasks through the Radisys platform, with the results meeting all essential requirements, including compliance needs. One such successful use case spotlighted in the podcast is the Radisys work with Lockheed Martin’s defence applications. "We enabled sophisticated use cases for Lockheed Martin by leveraging the underlying flexibility of 5G,” the Radisys speaker exemplified. Radisys team customised 5G connectivity for the US defence sector. It incorporated temporary, ad-hoc networks in challenging terrains using Internet Access Backhaul. It also covered isolated, permanent private networks for locations such as maintenance...

Duration:00:25:02

How Do You Make AI Agents Reliable at Scale?

10/27/2025
Now that companies have begun leaping into AI applications and adopting agentic automation, new architectural challenges are bound to emerge. With every new technology come new responsibilities, consequences and challenges. To help teams face and overcome some of these challenges, Temporal introduced the concept of “durable execution.” The concept has quickly become integral to building AI systems that are not just scalable but also reliable, observable and manageable.

In this episode of the Tech Transformed podcast, host Kevin Petrie, VP of Research at BARC, sits down with Samar Abbas, Co-founder and CEO of Temporal Technologies. They talk about durable execution and its critical role in driving AI innovation within enterprises. They discuss Abbas’s extensive background in software resilience, the development of application architectures, and the importance of managing state and reliability in AI workflows. The conversation also touches on the collaboration between developers, data teams, and data scientists, emphasising how durable execution can enhance productivity and governance in AI initiatives.

Also Watch: Developer Productivity 5X to 10X: Is Durable Execution the Answer to AI Orchestration Challenges?

Chatbots to Autonomous Agents

“AI agents are going to get more and more mission critical, more and more longer lived, and more asynchronous," Abbas tells Petrie. “They’ll require more human interaction, and you need a very stable foundation to build these kinds of application architectures.”

AI doesn't just fuel chatbots today. Enterprises are increasingly experimenting with agentic workflows: autonomous AI agents that carry out complex background tasks independently. For example, agents can assign, solve, and submit software issues using GitHub pull requests. Such a setup isn’t just a distant vision; the Temporal co-founder pointed to OpenAI’s Codex as a real-world case. With this approach, AI becomes a system that can handle hundreds of tasks at once, potentially achieving "100x orders of magnitude velocity," as Abbas described.

However, there are architectural difficulties to stay mindful of. AI agents are non-deterministic by nature and often depend on large language models (LLMs) like OpenAI’s GPT, Anthropic’s Claude, or Google’s Gemini. They reason based on probabilities, and they improvise. They often make decisions that are hard to trace or manage.

AI Workflows as Simple Code

This is where Temporal comes in. It acts as the execution layer that keeps the system cohesive and aligned. “What we are trying to solve with Temporal and durable execution more generally is that we tackle challenging distributed systems problems," said Abbas. Rather than developers stressing over queues, retries, or building their own reliability layers, Temporal allows them to write their AI workflows as simple code. Temporal takes care of everything else: reliable state management, retrying failed tasks, orchestrating asynchronous services, and ensuring uptime regardless of what fails below the surface. As agent-based architectures become more common, the demand for this kind of system-level orchestration will only increase. Listen to the full conversation on the Tech...
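To give a feel for what “AI workflows as simple code” means in practice, here is a minimal sketch using Temporal’s open-source Python SDK. The workflow and activity names, and the placeholder LLM call, are invented for illustration; a real deployment also needs a running Temporal server and a worker that registers these definitions:

```python
# Minimal sketch of an AI workflow written as plain sequential code.
# Temporal persists workflow state, so a crash mid-run resumes where it
# left off; queues, retries, and state management happen underneath.

from datetime import timedelta
from temporalio import activity, workflow
from temporalio.common import RetryPolicy

@activity.defn
async def call_llm(prompt: str) -> str:
    # Non-deterministic work (LLM calls, external APIs) lives in activities,
    # which Temporal retries automatically on failure.
    return f"LLM response to: {prompt}"  # placeholder for a real model call

@workflow.defn
class TriageAgent:
    @workflow.run
    async def run(self, issue: str) -> str:
        # Reads as ordinary code; durability is provided by the runtime.
        plan = await workflow.execute_activity(
            call_llm,
            f"Plan a fix for: {issue}",
            start_to_close_timeout=timedelta(minutes=5),
            retry_policy=RetryPolicy(maximum_attempts=5),
        )
        return plan
```

The design point is the separation: deterministic orchestration logic goes in the workflow, while anything flaky or probabilistic is pushed into retryable activities.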

Duration:00:25:41

How To Maintain Human Connection In An AI World?

10/21/2025
For CISOs and technology leaders, AI is reshaping business process management and daily operations. It can automate routine tasks and analyse data, but the human element remains critical for workforce oversight, customer interactions, and strategic decision-making.

In this episode of Tech Transformed, Trisha Pillay talks with Anshuman Singh, CEO of HGS UK, about AI in the workplace. They discuss how AI can support employees, improve customer service, and why it requires careful oversight. Singh also shares insights on preparing organisations for AI integration and trends leaders should watch in the coming years.

Questions or comments? Email info@em360tech.com or follow us on YouTube, Instagram, and Twitter @EM360Tech.

Chapters:
00:00 Introduction to AI and Human Element
03:03 AI's Impact on Workforce Dynamics
08:29 The Role of Human Oversight in AI
10:46 AI Innovations in Customer Service
16:34 Positioning for Growth in Business Process Management
20:01 Preparing the Workforce for AI Integration
25:35 Emerging Trends in AI and Workforce
29:19 Final Thoughts on AI and Ethics

Duration:00:23:48

AI-Powered Canvases: The Future of Visual Collaboration and Innovation

9/29/2025
In today’s fast-moving world, teams can’t afford to lose momentum. Yet traditional whiteboards and digital tools often slow the leap from ideas to execution. This is where AI-powered canvases change the game, helping teams move from collaboration to outcomes faster.

In this episode of Tech Transformed, Kevin Petrie, VP of Research at BARC, joins Elaina O’Mahoney, Chief Product Officer at Mural, to explore how AI collaboration tools are accelerating teamwork, from customer journey mapping to process design. Whether hybrid, remote, or in-person, visual collaboration ensures no one misses a beat on the way to their goals. With AI guiding the process, teams unlock alignment and deliver results faster, without sacrificing human creativity at the centre.

AI-Powered Canvases, Visuals, and Collaboration

A central theme in the conversation is the distinction between automation and augmentation. While AI can recommend activities, map processes, and identify participation patterns, decision-making remains a human responsibility. As O’Mahoney explains: “In the Mural canvas experience, we’re looking to draw out the ability of a skilled facilitator and give it to participants without them having to learn that skill over the years.”

This balance ensures that while AI-powered canvases streamline collaboration, teams still rely on human judgment, creativity, and contextual knowledge. One of the most powerful contributions is in AI-driven visuals, which can translate raw data or unstructured input into clear diagrams, journey maps, or process flows. These visuals not only accelerate understanding but also help teams spot gaps and opportunities more effectively.

The Role of Visual Tools in Hybrid Work

In blended work environments, teams often lack the in-person cues that guide effective collaboration. Visual canvases bring those cues into the digital workspace, showing where ideas are concentrated, highlighting gaps in participation, and enabling alignment across dispersed teams. By combining intuitive design with AI-driven support, platforms like Mural help organisations adapt to the demands of hybrid work while keeping human creativity at the centre.

Duration:00:19:11

Setting Up for Success: Why Enterprises Need to Harness Real-Time AI to Ensure Survival

9/17/2025
“The issue is data fragmentation, where untrustworthy data is siloed across different databases, SaaS applications, warehouses, and on-premise systems,” Vladimir Jandreski, Chief Product Officer at Ververica, tells Christina Stathopoulos, the Founder of Dare to Data. “Simply, there is no single view of the truth that exists. With governance and data quality checks, these are often inconsistent, and AI systems end up consuming incomplete or conflicting signals,” he added, setting the stage for the podcast.

In this episode of the Don't Panic, It's Just Data podcast, Stathopoulos speaks with Jandreski about the vital role of unified streaming data platforms in facilitating real-time AI. They discuss the difficulties businesses encounter when implementing AI, the significance of going beyond batch processing, and the capabilities necessary for a successful streaming data platform. Applications in the real world, especially in e-commerce and fraud detection, show how real-time data can revolutionise AI strategies.

Your AI Could Be a Step Behind

Jandreski says that most organisations are still engineered on batch-first data systems. That means they still process information in chunks, often hours or even days later. “It's fine for reporting, but it means your AI is always going to be one step behind.” However, “the unified streaming platform flips that model from data at rest to data in motion.” A unified platform will “continuously capture the pulse” of the business and feed it directly to AI for automated real-time decision-making.

Challenges of Agentic AI

As the world moves toward the era of agentic AI, some key challenges still need to be addressed. Agentic AI means autonomous agents that make real-time decisions, maintain memory, use tools and collaborate among themselves. Because they act on their own decisions, regulating them is necessary. Building agents is not the main challenge; the real challenge is “actually giving them the right infrastructure,” Jandreski highlights.

Alluding to AI prototyping frameworks such as LangChain or LlamaIndex, he further explained that those frameworks work for demos. In reality, however, they can’t support long-running, system-triggered workflows that demand high availability, fault tolerance, and deep integration with enterprise data. This is because enterprises have multiple systems, and many of them are not connected, so the data ends up in silos. When data is in silos, a unified streaming data platform becomes the key solution. “It provides a real-time, event-driven, contextual runtime where AI agents need to move from lab experiments to production reality.”
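To make the “data at rest versus data in motion” contrast concrete, here is a minimal, self-contained Python sketch. The event shapes and the stand-in scoring function are invented; a production system would use a streaming platform (for example Kafka plus Flink, Ververica's domain) rather than an in-process generator:

```python
# Illustrative contrast between batch (data at rest) and streaming (data in
# motion). Event shapes and the "score" logic are invented for illustration.

import time
from typing import Iterator

def score(event: dict) -> float:
    # Stand-in for a real-time AI model, e.g. a fraud-risk scorer.
    return 0.9 if event["amount"] > 1000 else 0.1

# Batch-first: collect events, process hours later; AI is a step behind.
def nightly_batch(events: list[dict]) -> list[float]:
    return [score(e) for e in events]  # runs long after the events happened

# Streaming: score each event as it arrives, so decisions are immediate.
def stream_scores(events: Iterator[dict]) -> Iterator[tuple[dict, float]]:
    for event in events:  # in production this is a Kafka/Flink source
        yield event, score(event)

def live_feed() -> Iterator[dict]:
    for amount in (50, 2500, 120):
        yield {"amount": amount, "ts": time.time()}

for event, risk in stream_scores(live_feed()):
    if risk > 0.5:
        print(f"flag transaction {event} immediately")  # real-time decision
```

The code path is nearly identical in both cases; what changes is when it runs relative to the event, which is exactly the gap Jandreski is pointing at.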

Duration:00:19:09

How Can AI Bridge the Gap from Observability to Understandability?

9/12/2025
"The tools we make are observability tools today. But it can never be the goal of our business to provide observability. The goal of our business as a vendor and as a partner with our customers is to give them understandability,” stated Nic Benders, the Chief Technical Strategist at New Relic. In this episode of the Don't Panic It's Just Data podcast, host Christina Stathopoulos, the Founder of Dare to Data, speaks with Benders about where observability is headed in IT systems. They discuss how AI is transforming observability into a more comprehensive understanding of complex systems, moving beyond traditional monitoring to achieve true understandability. Benders explained the importance of merging various data types to provide a complete picture of system performance and user experience. He believes AI can bridge the gap between mere observation of systems and a deeper understanding of their functionality. This could ultimately lead to enhanced incident response and operational efficiency. With maturing technology, complexity is expected to grow, too. The straightforward act of “observing” those complexities is like watching a green light on a machine. This is not enough. The major challenge is to “understand” the inside operations of the machine. This is the difference between simply seeing the data and knowing the "why." Observability to Understandability As per Benders, the term observability "leaves a lot to be desired." While it’s the industry’s common label, it only describes seeing a system. The real goal, he argues, is to understand it. Alluding to an analogy, the technical strategist asks Stathopoulos to imagine a nuclear power plant full of a million blinking lights and screens. “You can have all the observability available, but if you're not an expert, you won't grasp what’s actually happening,” says Benders. Typically, software has been developed by a single person who knows every inch of it. However, today, technology has become more perplexing. AI, alongside teamwork and collaboration, provides the tools to solve this problem. An engineer might manage code they didn’t write, making a dashboard full of charts unhelpful. Understandability means moving beyond raw data to give context and meaning. Ultimately, Benders advises IT leaders to embrace change. The tech industry is constantly changing and advancing. Instead of fearing new tools, organizations should focus on what they need to grasp the unknown. As he puts it, "a lot of unknown is coming over the next few decades." Takeaways Chapters

Duration:00:29:15

Not just Chatbots: What AI Agents Really Mean for Enterprises

9/10/2025
The phrase “AI agent” still brings to mind chatbots handling customer queries. Fast forward to today: AI agents are far more versatile, representing a new generation of systems capable of perceiving, reasoning, and acting autonomously. These systems are beginning to reshape how enterprises operate, not just in customer service but across software development, data analytics, and operational workflows.

In this episode of Tech Transformed, Dare To Data Founder Christina Stathopoulos explores the rapid rise of AI agents with Ben Gilman, CEO of Dualboot Partners. Together, they unpack how AI agents differ from traditional automation and what this shift means for software development, enterprise operations, and the future of productivity.

AI Agents vs. Traditional Automation

Unlike traditional automation, which follows strict, deterministic rules, AI agents can adapt to changing inputs, analyse complex data sets, and make autonomous decisions within defined parameters. This allows them to tackle tasks that were previously too intricate or time-consuming for automated systems. Dualboot Partners helps organisations harness these AI agents, integrating them into workflows to deliver real business value through a combination of product, design, and engineering expertise.

“The biggest difference with an AI agent, between a standard tool, is that the agent can perceive information and reason about it, providing context and insights you don’t normally get in an algorithm.” — Ben Gilman, CEO, Dualboot Partners

The Future of AI in Enterprise

Organisations face several hurdles when integrating AI agents, including defining clear use cases, understanding the probabilistic nature of AI reasoning, and incorporating agents into existing processes and workflows. Despite the challenges, the potential payoff is substantial. AI agents can boost productivity, improve decision-making, and make enterprises more agile. As these systems mature, humans and AI are increasingly collaborating as true partners, reshaping what the workplace and work itself look like.

Chapters:
0:00 - 3:00: Introduction to AI Agents
3:01 - 6:00: Differences from Traditional Automation
6:01 - 12:00: Real-World Applications and Examples
12:01 - 18:00: Challenges in Adoption
18:01 - 22:00: Future Impact on Tech and Operations
22:01 - 24:00: Conclusion and Final Thoughts

About Dualboot Partners
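The “perceive, reason, act within defined parameters” distinction can be sketched in a few lines of Python. Everything here, from the invoice data to the stub standing in for an LLM’s judgment, is invented for illustration and is not Dualboot’s methodology:

```python
# Minimal perceive-reason-act loop contrasting an agent with fixed-rule
# automation. Data shapes and the "reason" stub are invented placeholders.

def traditional_automation(invoice: dict) -> str:
    # Deterministic rule: same input, same path, every time.
    return "approve" if invoice["amount"] < 500 else "escalate"

def agent_step(invoice: dict, history: list[str]) -> str:
    # Perceive: gather context beyond the single input.
    context = {"invoice": invoice, "recent_decisions": history[-3:]}
    # Reason: in a real agent this is an LLM call weighing the context;
    # here a simple stub stands in for that probabilistic judgment.
    suspicious = invoice["vendor"] not in {"acme", "globex"}
    # Act: choose among allowed actions within defined parameters.
    if suspicious:
        return "hold_for_review"
    return traditional_automation(invoice)

history: list[str] = []
for inv in [{"amount": 120, "vendor": "acme"},
            {"amount": 120, "vendor": "unknown-llc"}]:
    decision = agent_step(inv, history)
    history.append(decision)
    print(inv["vendor"], "->", decision)
```

Note that the agent still ends in a bounded action set; autonomy here means choosing among permitted actions with more context, not acting without limits.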

Duration:00:22:19

How to Prepare Your Team for Edge Computing?

9/4/2025
In a time when the world is run by data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In a recent episode of the Tech Transformed podcast, host Shubhangi Dua, a Podcast Producer and B2B Tech Journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma.

The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre; they need to compute where the data is generated. Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience and data control, all necessary capabilities for modern applications.

Adopting Edge Computing

For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills necessary in multi-cloud environments – automation, infrastructure as code, and observability – translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.

Simplifying Complexity

Managing different APIs, SDKs, and vendor lock-in across a distributed network can be a challenging task, and this is where platforms like emma become crucial. Alluding to emma’s mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge."

Overall, emma creates a unified API layer and user interface that simplifies this complexity. It helps businesses manage, automate, and scale their workloads from a single vantage point and reduces the burden on IT teams. Reducing the need for a large team of highly skilled professionals also leads to substantial cost savings: emma’s customers have seen their cloud bills drop significantly and their updates roll out much faster using the platform.
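A unified API layer of this kind is, at heart, an abstraction over heterogeneous backends. Here is a conceptual Python sketch under that assumption; the class names and methods are invented, and emma’s actual platform API will differ:

```python
# Conceptual sketch of a unified API layer over heterogeneous environments.
# Provider names and methods are invented; real platforms differ.

from abc import ABC, abstractmethod

class ComputeTarget(ABC):
    """One interface, many backends: public cloud, private DC, or edge."""
    @abstractmethod
    def deploy(self, workload: str) -> str: ...

class PublicCloud(ComputeTarget):
    def deploy(self, workload: str) -> str:
        return f"deployed {workload} via cloud provider SDK"

class EdgeSite(ComputeTarget):
    def deploy(self, workload: str) -> str:
        # Edge nodes may be intermittently connected; queue if offline.
        return f"queued {workload} for edge site rollout"

def roll_out(workload: str, targets: list[ComputeTarget]) -> None:
    # Callers never touch provider-specific APIs or SDKs directly,
    # which is what removes the per-vendor burden from IT teams.
    for t in targets:
        print(t.deploy(workload))

roll_out("inference-service:v2", [PublicCloud(), EdgeSite()])
```

The value of the pattern is that adding a new environment means adding one adapter, not retraining every team on another vendor's tooling.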

Duration:00:23:38

How Can Manufacturers Solve the Mass Customisation Problem?

8/26/2025
"The real challenge that many manufacturers have dealt with for a long time and will keep facing is the shift from mass manufacturing to mass customisation," stated Daniel Joseph Barry, VP of Product Marketing at Configit. In a world that has moved from mass manufacturing to mass customisation, makers of complex products like cars and medical devices face a hidden problem. For more than a century, since the time of Henry Ford, manufacturers have worked in a separate, mass-production mindset. This method in the recent industrial scenario has caused a lot of friction and frustration. In this episode of the Tech Transformed podcast, Christina Stathopoulos, Dare To Data Founder, talks with Daniel Joseph Barry, VP of Product Marketing at Configit. They talk about Configuration Lifecycle Management (CLM) and its importance in tackling the challenges that manufacturers of complex products face recurrently. The speakers discuss the move from mass manufacturing to mass customisation, the various choices available to consumers, and the need to connect sales and engineering teams. Barry emphasises the value of working together to tackle these challenges. He points out that using CLM can make processes easier and enhance customer experiences (CX). What is Configuration Lifecycle Management (CLM) According to Barry, Configuration Lifecycle Management (CLM) is an approach that involves managing product configurations throughout their lifecycle. He describes it as an extension of Product Lifecycle Management (PLM) that focuses specifically on configurations. In today's highly bespoke world, customers are buying configurations of products instead of just the products themselves. The answer isn't to work harder within existing teams but to adopt a new, collaborative approach. This is where Configuration Lifecycle Management (CLM) comes in. CLM creates a single, shared source of truth for all product configuration information. It combines data from engineering, sales, and manufacturing. Configit’s patented Virtual Tabulation® (VT™) technology pre-computes all the different options, so there’s no longer a need for slow, real-time calculations. Barry says, "It's just a lookup, so it's lightning fast.” This represents a prominent shift that removes the delays and dead ends, frustrating customers and sales staff. Such a centralised system makes sure that every department uses the same, verified information, stopping errors from happening later on. One such company, and Configit’s customer, Vestas, a wind power company, automated its configuration process for complex wind turbines that have 160,000 options. By adopting a CLM approach, they cut the time to configure a solution from 60 minutes to just five. Tune into the podcast for more information on the transformational impact of Configuration Lifecycle Management (CLM). Takeaways

Duration:00:38:17

How Enterprises Can Leverage IoT and AI to Improve Efficiency and Sustainability

8/19/2025
As global industries face mounting pressure to operate more efficiently and sustainably, many are turning to the combined power of artificial intelligence (AI) and the Internet of Things (IoT). From optimising energy usage to enabling real-time decision-making, these technologies are reshaping how businesses think about infrastructure, impact, and innovation. But the road to adoption isn’t without its challenges, from data literacy to greenwashing.

In this episode of Tech Transformed, EM360Tech host Trisha Pillay talks with Akanksha Sharma, Senior Director at the GSMA Foundation, about how these emerging technologies are creating tangible value, especially for small and medium-sized enterprises (SMEs) and industries with legacy systems, such as utilities.

IoT and AI

Sharma highlights that the 2020s will be remembered as the decade of exponential IoT growth, supported by GSMA Intelligence data projecting over 37 billion IoT connections worldwide by 2030, more than double the number recorded in 2021. She notes that, unlike previous technological waves, AI adoption is accelerating rapidly, moving from niche awareness to mainstream use within just a few years.

When discussing climate action and carbon markets, Sharma stresses the need for transparent, data-backed verification mechanisms. She warns against superficial greenwashing practices and advocates for AI systems that prioritise accuracy and ethical standards to ensure genuine environmental benefits.

Chapters:
00:00 – Transforming Sustainability with Data-Driven Infrastructure
03:05 – The Role of AI and IoT in Enterprises
09:10 – Challenges in Operational Efficiency and Sustainability
13:42 – Real-World Impact of AI and IoT
16:57 – Carbon Markets and Digital Solutions
21:08 – Understanding Greenwashing
23:30 – Barriers to Technology Adoption
26:17 – Key Takeaways and Predictions

About Akanksha Sharma
Akanksha Sharma leads the ClimateTech and Digital Utilities programmes at GSMA, where she drives innovation at the...

Duration:00:25:04

Why Data Strategy Fails Without Data and AI Literacy

8/13/2025
Many companies spend heavily on data technology but often overlook the importance of data and AI literacy. Without the right skills, even the best platforms can fail to deliver results. Teams need to understand how to work with data and AI to make any strategy successful.

In this episode of Tech Transformed, EM360Tech’s Trisha Pillay chats with Greg Freeman, the founder of Data Literacy Academy, about why knowing data and AI matters for anyone building a digital strategy.

Data and AI Literacy

Freeman points out that many data strategies end up as technical documents rather than actionable roadmaps. He explains that organisations often spend heavily on infrastructure, expecting better tools to solve their problems. But without employees who understand how to work with data and why it matters, these investments rarely deliver results.

Freeman explains that data strategies often fail because only a small portion of employees, less than 20 per cent, are truly enthusiastic about data. Most strategies are designed with this minority in mind, creating an echo chamber that leaves the majority behind. As a result, data stays siloed, and business decisions don’t improve. The Data Literacy Academy founder stresses that unless organisations engage the 80 per cent of employees who aren’t already invested, their strategies are unlikely to succeed. When the focus is on tools rather than people, adoption falls behind.

About Greg Freeman

Greg Freeman is the founder and CEO of Data Literacy Academy, where he works with CDOs, CIOs, and business leaders to drive real cultural change around data. His mission is to help organisations tackle data illiteracy by building confidence and capability from the ground up, especially for employees who feel disengaged or anxious about data. With a background in sales leadership and tech startups, Greg brings both strategic insight and real-world experience.

Duration:00:26:42

What Does the Future of CX Look Like with Agentic AI?

8/7/2025
"As agentic AI spreads across industries,” states Rishi Rana, the Chief Executive Officer at Cyara. “Everybody is curious to understand how that is going to transform customer experience across all the channels?" In this episode of the Tech Transformed podcast, Shubhangi Dua, the Host and Podcast Producer at EM360Tech, talks with Rishi Rana, the CEO of Cyara, about how agentic AI is changing customer experience (CX). They look at how AI has developed from simple chatbots to advanced systems that can understand and predict customer needs. Rana spotlights the need for ongoing testing and monitoring to make sure AI solutions work well and follow the regulations. They also discuss the obstacles businesses encounter when implementing AI, the importance of good data, and the future of AI agents in improving customer interactions. Agentic AI Transforming Customer Experience (CX) Customer experience (CX) is changing quickly and significantly, thanks to the rise of agentic AI. These advanced systems go beyond the basic chatbots of the past. While such a change may offer a future equipped with a smart, proactive customer journey, it doesn't come without its challenges. These obstacles require organisations to thoughtfully plan and carefully execute strategies. For years, chatbots provided a basic type of automated customer support. However, Rana explains that the evolution of AI is pushing boundaries. "AI in customer experience (CX) is changing from a basic level of chatbots that have been present for the last five or 10 years. Now they are turning into fully agentic systems that operate across voice, digital and human-assisted channels," said Rana. Moving Beyond Basic Chatbots Chatbots’ lucrative development lies in the strengths of Large Language Models (LLMs) like Google's Gemini, Meta's Llama, and OpenAI's ChatGPT. This is because the AI-backing models are facilitating "voice bots" and other AI agents to move beyond simple response automation to intelligent orchestration. Intelligent orchestration results in anticipating user needs, adjusting in real-time, and guiding customers to hybrid solutions where AI and human agents work together. Ultimately, the goal is to greatly improve the customer experience (CX). Studies suggest that 86 per cent of people are willing to pay more for the same service, no matter what it is, when the customer experience is better. Advancements don’t come without a price. Rana believes the lack of proper guardrails is a cause for concern. "AI is great, but you need to have guardrails and ensure the intent behind the questions and the objective behind the customer interaction is getting answered." This requires ongoing testing and monitoring across all channels to ensure consistency and avoid problems like hallucinations, misuse, or bias. These issues can result in major financial losses and damage to reputation. For instance, Rishi Rana mentioned that over "$10 billion in violations and liabilities due to incorrect information given to customers" occurred in 2024 alone. To successfully execute agentic AI, enterprises must shift left with AI by...

Duration:00:23:17

Developer Productivity 5X to 10X: Is Durable Execution the Answer to AI Orchestration Challenges?

8/6/2025
"If you are not using it and don't understand it, you are losing big time because you reinvent that wheel of durable execution yourself, and it's hard," reasons Maxim Fateev, Co-Founder and CTO of Temporal Technologies. In a recent episode of the Tech Transformed podcast, Fateev explored the concept of durable execution. This approach not only improves software reliability but is also becoming essential for the next generation of AI agents and orchestration. What is Durable Execution? Durable execution, a concept trademarked by Temporal Technologies, changes how developers build reliable applications. "The idea is simple–we preserve the full state of code execution all the time," he explained. Imagine a function that makes a series of API calls. If the process running that function crashes, even days later, "we can bring that function back in exactly the same state, still blocked on the same API call with all the variables and state there, and deliver the response. Then it will continue to the next line of code." From a programming point of view, the function actually “never crashed. It just seamlessly waited for three days, blocked on that API call, and then continued execution," says Fateev. This ability to provide "crashless execution" changes how developers approach building reliable software. It allows functions to run for long periods, even a year, without interruption or data loss. Temporal's Role in OpenAI's Image Generation Alluding to Temporal’s use case, Fateev referenced their contribution to OpenAI’s image generation code. He stated, "Every time you generate the image using OpenAI, it uses Temporal behind the scenes." "Image generation is orchestration. It's not just like one API call. There are a lot of API calls which need to happen to generate an image, and Temporal’s tech guarantees execution." While a strong tool for AI, durable execution has many uses beyond that. Fateev notes that Temporal has also been used for "driving large-scale operations with an added productivity advantage, and it’s also for a startup with two people with a small-scale solution.” From infrastructure automation, like HashiCorp Cloud, and data handling to key business tasks such as customer onboarding and instant payments, including UPI in India and similar systems in Brazil, Temporal shows its worth in various industries. "Every Snapchat story is an important workflow,” Fateev tells Dua. Leading AI companies like Replit, Abridge, and OpenAI are using Temporal to power their workflows.” The main idea stays the same: "Nearly every time you need to run any business logic reliably, it works well." Takeaways

Duration:00:24:18

Why an Agentic Data Management Platform is the Next Generation Data Stack

7/14/2025
Enterprise data management is undergoing a fundamental transformation. The traditional data stack, built on rigid pipelines, static workflows, and human-led interventions, is reaching its breaking point. As data volume, velocity, and variety continue to explode, a new approach is taking shape: agentic data management.

In this episode of Tech Transformed, EM360Tech’s Trisha Pillay sits down with Jay Mishra, Chief Product and Technology Officer at Astera, to explore why agentic systems, powered by autonomous AI agents, Large Language Models (LLMs), and semantic search, are rapidly being recognised as the next generation of enterprise data architecture. The conversation explores the drivers behind this shift, real-world applications, the impact on data professionals, the challenges agentic platforms face, and the future of data stacks. Jay emphasises the importance of starting small and measuring ROI to successfully implement agentic solutions.

What is Agentic Data Management?

At its core, agentic data management is the application of intelligent, autonomous agents that can perceive, decide, and act across complex data environments. Unlike traditional automation, which follows predefined scripts, agentic AI is adaptive and self-directed. These agents are capable of learning from user behaviour, integrating with different systems, and adjusting to changes in context, all without human prompts. As Jay explains, "An agentic system is one that has the agency to make decisions, solve problems, and orchestrate actions based on real-time data and context, not just on training data."

Chapters:
00:00 Introduction to Agentic Data Management
02:58 Understanding Agentic Data Management
06:58 Drivers of Change in Data Management
10:03 Real-World Applications of Agentic AI
14:15 Impact on Data Engineers and Analysts
16:43 Challenges and Limitations of Agentic Data Platforms
20:03 Future of Data Stacks
23:31 Final Thoughts on Agentic Data Management

About Jay Mishra
Jay Mishra is the Chief Product and Technology Officer at Astera Software, with over two decades of experience in data architecture and data-centric software innovation. He has led the design and development of transformative solutions for major enterprises, including Wells Fargo, Raymond James, and Farmers Mutual. Known for his strategic insight, technical leadership, and passion for empowering organisations, Jay has consistently delivered intelligent, scalable solutions that drive...

Duration:00:23:35

Are AI Agents the Future of Developer Productivity in the Enterprise?

7/10/2025
"There's a lot of hype with the AI agents and their productivity and potential outcomes. AI Agents are quite amazing, says Eric Paulsen, EMEA Field CTO at Coder. In this episode of the Tech Transformed podcast, Shubhangi Dua, Podcast Host and Producer at EM360Tech, talks to Paulsen about the constantly advancing role of AI agents in development environments. Paulsen explains how AI agents can help developers by handling simpler tasks, almost like having assistants or junior developers to assist them. Not only would this boost productivity and time efficiency, but the technology will also ensure human oversight. The conversation further explores how AI fits into cloud development environments, especially in regulated areas like finance, where security and scalability matter most. Paulsen stresses the value of internal AI models and points out Coder's unique role in offering infrastructure-neutral solutions that meet various enterprise needs. AI Agents Are More Than Just Code Writers When people hear "agentic AI" or "coding agents," there's often a misconception about fully autonomous coders. However, Paulsen clarifies, "That's a far stretch from where we currently have been, which is with just AI-assisted IDE extensions such as GitHub, Copilot, Amazon Q Developer and systems of that nature." Coder focuses on agentic solutions that have a human developer in the loop, emphasising Paulsen. “Think of an AI agent as a junior engineer working alongside you,” Paulsen explains. "If anything, it’s improving the output of the human engineer by having an autonomous or artificial or AI process. In the same development environment, working on other tasks that might not necessarily be as complex," he adds. This means developers can offload simple tasks like bug fixes or dependency updates, freeing them to focus on more complex features. How to Scale AI Agents Securely in Enterprises? For large financial institutions that have hundreds and even thousands of software engineers, deploying AI agents at scale requires a consistent and secure approach. Cloud development environments provide the best way to deliver and package these agents for developers. The main concern for enterprises is ensuring data security in addition to stopping AI agents from "running wild on a laptop." Paulsen stresses the need for agents to work within an "isolated compute," with "boundaries around those agents inside of that isolated compute." Such a secure environment provides guardrails to synchronise and boost productivity between humans and AI while preventing sensitive data breaches or "hallucinations" from the AI. Additionally, financial institutions are now increasingly developing their own internal AI models. Paulsen mentions, "What these institutions need is an AI agent that is trained on the internal dataset and internal LLM that is built within the firm so that it can make those decisions and return the relevant output to the data scientist or software engineer." This move towards self-hosted LLMs and internal AI infrastructure is essential for adopting enterprise-grade AI. The ultimate message is that cloud development environments should provide the framework where AI agents are running inside an enterprise’s infrastructure. “AI agents have access to the data, and they're observed and governed by a set of security standards that you have internally,” says the EMEA Field CTO at Coder. Takeaways AI agents can assist developers by handling simpler...

Duration:00:20:15

The Death of Expertise: What AI Won’t Teach Us

7/9/2025
Are we heading for a future where AI knows everything but won’t bother explaining it to us? Advancements in artificial intelligence are rapidly transforming the way industries operate and influencing the future of society as a whole. AI has become a force behind breakthrough technologies such as big data analytics, robotics, and the Internet of Things (IoT). The rise of generative AI has only accelerated its adoption and broadened its impact across multiple sectors.

Navigating the Displacement Dilemma

In this episode of Tech Transformed, EM360Tech host Trisha Pillay sits down with Nigel Cannings, author and AI expert, to explore one of the most pressing questions of our time: what happens to human expertise in the age of rapid AI advancement? Cannings warns that while technology promises efficiency and faster results, it also encourages dependency. Our patience has run thin, and in our rush for instant answers, we may be undermining the very systems that develop human expertise.

“I’m kind of fascinated by the change we’ve seen in how we process information,” Cannings reflects. He describes the displacement dilemma as the idea that tools meant to democratise knowledge could actually erode the skills and pathways that build true mastery. He worries about people losing jobs or becoming too dependent on technology to even start careers. “I’m really interested to talk to people who’ve been affected by the displacement dilemma, people who are losing their jobs, people who think they’re going to lose their jobs, people who can see the erosion of expertise and skills,” Cannings explains.

The Future of AI

As artificial intelligence evolves at breakneck speed, we face a harsh possibility: the gap between human and AI intelligence could become so wide that we no longer understand the systems we build. Worse still, AI itself may have no incentive to help us understand it. At that point, it stops being just a tool and becomes an autonomous entity with its own private reality.

In 2025, Chief AI Officers report an average AI ROI of 14 per cent, as many AI programmes move beyond pilots to larger implementations at scale. As AI continues to evolve at an unprecedented pace, understanding its implications matters, both for industries navigating these changes and for society adapting to a new technological landscape.

Chapters:
00:00 Introduction to AI and Human Expertise
04:06 The Displacement Dilemma: Erosion of Expertise
06:59 Changing Information Consumption in the AI Era
10:07 Technical Aspects of AI: Data Centres and Encryption
15:42 Limitations of AI in Scientific Discovery
20:22 The Future of Superhuman AI
23:23 The Race in AI Development
28:18...

Duration:00:23:54

Multi-Cloud & AI: Are You Ready for the Next Frontier?

7/8/2025
"AI may be both the driver and the remedy for multi-cloud adoption," says Dmitry Panenkov, Founder & CEO of emma, alluding to the vast potential and possibilities Artificial Intelligence (AI) and multi-cloud strategies offer. In this episode of the Tech Transformed podcast, Tom Croll, a Cybersecurity Industry Analyst and Tech Advisor at Lionfish, speaks to Panenkov. They talk about the intricacies of powering multi-cloud systems with AI, offering valuable insights for businesses aiming to tap into the full potential of both. They also discuss data fragmentation, interoperability issues, and security concerns. AI Adoption in Multi-Cloud Addressing the key challenges of AI adoption in multi-cloud environments, Panenkov spotlights one of the most prominent issues – data fragmentation. “AI thrives on unified data sets. But multi-cloud setups often lead to data silos across the different platforms,” the founder of emma, the cloud management platform, explained. Data silos creates a disconnect which makes it increasingly challenging for AI models. It makes it harder for AI models to access and process the huge amounts of data needed to function efficiently. Instead, Panenkov stresses the potential of AI to drive multi-cloud adoption by optimising workloads and automating policies. In addition to data fragmentation, the lack of interoperability and tooling presents another challenge when integrating AI with multi-cloud. This is where Inconsistent APIs, a lack of standardisation, and variations in cloud-native tools create major friction. The difference is evident when building AI pipelines across diverse environments. Panenkov also pointed out the impact of latency and performance. He says, "Even Kubernetes is sensitive to latency. When we talk about AI and inference, and I'm not even talking about the training, I'm saying that inference is also sensitive." Without proper networking solutions, running AI workloads effectively in multi-cloud environments becomes next to impossible. Of course, security and compliance are a looming challenge for all enterprises across varying industries. Managing data protection in different jurisdictions and environments adds layers of legal and operational complexity. Despite these challenges, AI has significant advantages in multi-cloud systems that well surpass any challenges. Intelligent Orchestration is the Key to Successful Multi-Cloud Adoption The main topic of the conversation was how AI can actually help overcome the complexities of multi-cloud adoption. As the...

Duration:00:23:45