
Targeting AI

Technology Podcasts

Location:

United States

Description:

Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant communities, academia and the arts, as well as enterprise AI technology users and advocates for data privacy and responsible AI use. Topics are tied to news events in the AI world, but the episodes are intended to have a longer, more "evergreen" run. They are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast also occasionally hosts guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions, and includes some news-oriented episodes in which Sutner and Ajao review the news.

Language:

English


Episodes

It's looking like 2024 is the year of ROI for generative AI

2/26/2024
Generative AI vendors and investors have turned their attention from last year's innovative frenzy to ROI, monetizing the language models that have revolutionized the tech world in a short time. That's the outlook on 2024 from Kashyap Kompella, founder and analyst at RPA2AI Research, who was a guest on the Targeting AI podcast from TechTarget Editorial. "If we think about it, 2023 really was the year of shock and awe for AI technology," Kompella said on the podcast. "But I think in 2024, there is going to be some amount of focus -- if not sole focus -- on return on investment." At the same time, the tech landscape in 2024 is seeing an astonishing profusion of AI language models, from the ever-expanding power of large language models (LLMs) to the rise of small and open source models, and even models adapted for mobile devices, Kompella noted. "The burst of technological innovation will continue," he said. Investors looking for generative AI tech vehicles to pump venture funds into are hoping to hit "pay dirt" this year, as Kompella put it. "But the businesses and the organizations that are looking to implement AI systems, they're going to be also focused on business value and return on investment," he said. Meanwhile, 2024 is seeing a continuation and even a ramping up of the litigation surrounding generative AI systems, along with a growing emphasis on making those systems safe by attempting to reduce or eliminate bias and inaccurate outputs. Everyone from comedian and author Sarah Silverman and best-selling novelist John Grisham to The New York Times is suing generative AI vendors for misappropriating their work. "Businesses are … becoming aware of some of the risks of using the AI systems," Kompella said. "So we'll see more indemnity clauses being offered by AI vendors." Looking at the swelling generative AI market, Kompella also noted that venture capital activity in the arena is accelerating after a strong year in 2023. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:50:00

AI, hiring and the need for humans in the loop

2/12/2024
The fear of AI technology eliminating thousands of jobs or affecting the hiring process continues to prevail in the age of generative AI. While many believe that AI technology will augment workers, some are already seeing the effect of AI in the job market. Indeed, tech companies and other large enterprises have laid off thousands of workers in recent months, though staffing levels are mostly still higher than before the COVID-19 pandemic. In a November 2023 ResumeBuilder.com survey of 750 business leaders, 44% said AI technology would cause layoffs in 2024. The presence of AI in the hiring process has also led to laws such as New York's Local Law 144. It prevents employers from using an automated employment decision tool unless they prove they performed a bias audit beforehand. This law and others are among the ways of proving accountability in the hiring process, said Cliff Jurkiewicz, vice president of global strategy at Phenom, an AI recruiting vendor. "We must be accountable for the use of artificial intelligence, and the recommendations that it may be making in our decision-making," Jurkiewicz said on TechTarget Editorial's Targeting AI podcast. While accountability is needed, removing all bias in hiring and recruiting is almost certainly unattainable, Jurkiewicz said. "It is impossible to do that," he said. "It requires humans in the loop ... to be examining how these tools are functioning and being used in organizations." Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, unified communications software, analytics and data management technology. Together, they host the Targeting AI podcast.

Duration:00:49:57

B Corp Sama responds to AI data labeling criticism

1/29/2024
Data labeling and annotation vendor Sama seeks to make an impact not only in the tech market but also in parts of the world where it's hard for people to partake in the digital economy. As a women-led B Corporation chartered to do social and environmental good, Sama employs numerous people in countries such as Kenya and has created more than 10,000 jobs in those regions, said CEO Wendy Gonzalez on the latest episode of the Targeting AI podcast from TechTarget Editorial. Yet Sama has faced intense criticism for paying substandard wages to workers in Africa and for subjecting them to inhumane work environments by requiring them to view and then label offensive and violent images. On the podcast, Gonzalez blamed some of the practices on its former client, generative AI giant OpenAI. She also argued that her company created decently paying jobs for people who otherwise would have trouble gaining employment. "It went beyond the boundaries of work that we were comfortable doing," Gonzalez said. "It was only in existence for a handful of months." Meanwhile, Sama's business mission is to help enterprises minimize the risk of AI model failure using its data annotation services.

New multi-cloud integration

Most recently, on Jan. 24, the vendor introduced a multi-cloud integration strategy in its platform to increase the speed of new project onboarding. The integration allows enterprises to keep their data with one of the three top cloud providers -- AWS, Microsoft and Google -- while still giving Sama access to the data. It also enables faster onboarding to the Sama platform and an integration suite compatible with Python SDKs and the Databricks platform. The integration reduces the cost of data egress because it eliminates the need for organizations to move data around in a multi-cloud model deployment, said Gartner analyst Sid Nag. "It speeds up application development via integration with other SDKs and programming language models while conforming to compliance and security models," Nag added. However, it's unclear how the Sama product gets access to the data contained in an organization's primary cloud provider, Nag continued.

Ethics of data annotation and labeling

While Sama has found success in the data annotation niche, it has navigated a turbulent history in Africa. Sama came under fire while performing contracted work for OpenAI in November 2021. On behalf of OpenAI, Sama hired data labelers in Kenya for take-home pay of about $2 per hour. The labelers were charged with trying to remove toxic data from the training data sets of tools such as ChatGPT. However, some of the workers accused Sama of making them read sexually disturbing texts while paying them unfairly low wages. Although the work was beyond the norms of what Sama says it usually does in regions like Kenya, the incident still raised questions about the ethical implications of data labeling and what human workers are asked to do when removing toxic data from generative AI systems like ChatGPT. For Gonzalez, it has to do with the types of jobs available for workers like those in Kenya and how those workers can be a part of the digital economy. "If there were plentiful jobs, meaning you sort of take it or leave it, then that would be amazing," she said on the podcast. "But that's not the situation. Being able to have people from around the world, globally in particular, the ones that have the greatest barriers to employment have access to the digital economy is important."
Complete and effective data is also important, she continued. "You need a human in the loop to then validate that the AI or the model is interpreting that data as expected," Gonzalez said. "If it isn't, then you need to be able to flag that and then reflect and retrain that model." Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a journalist with 34 years of experience,...
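The human-in-the-loop check Gonzalez describes can be pictured as a simple review loop. The sketch below is illustrative only and not Sama's actual tooling; the Sample fields, the review_batch function and the disagreement threshold are assumptions made for the example. Model predictions are compared with human labels, disagreements are flagged for review, and a high disagreement rate signals that the model should be retrained.

```python
# Hypothetical human-in-the-loop validation sketch (not Sama's pipeline):
# compare model outputs against human annotations, flag disagreements,
# and recommend retraining when disagreement passes a threshold.
from dataclasses import dataclass

@dataclass
class Sample:
    item: str
    human_label: str   # label assigned by a human annotator
    model_label: str   # label predicted by the model under review

def review_batch(samples, disagreement_threshold=0.10):
    """Return flagged samples, the disagreement rate and a retraining flag."""
    flagged = [s for s in samples if s.human_label != s.model_label]
    rate = len(flagged) / len(samples) if samples else 0.0
    return flagged, rate, rate > disagreement_threshold

if __name__ == "__main__":
    batch = [
        Sample("image of a stop sign", "stop_sign", "stop_sign"),
        Sample("image of a yield sign", "yield_sign", "stop_sign"),  # disagreement
        Sample("image of a pedestrian", "pedestrian", "pedestrian"),
    ]
    flagged, rate, retrain = review_batch(batch)
    print(f"flagged={len(flagged)} rate={rate:.0%} retrain={retrain}")
```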

Duration:00:33:32

Examining Microsoft venture fund M12’s AI investment approach

1/16/2024
In the age of generative AI, Microsoft has become one of the leading investors after its massive bet on ChatGPT creator OpenAI. Since Microsoft's $13 billion investment in OpenAI, the AI market has seen changes including a tilt toward smaller and open source AI language models. Meanwhile, the tech giant's venture fund, M12 -- which did not take part in the OpenAI deal -- is still keeping its eye out for other AI startups that could be just as big as OpenAI. M12 seeks technologies that are new and transformative in the market, said partner Michael Stewart. "These are usually technologies where Microsoft does not have an existing large product," Stewart said on TechTarget Editorial's Targeting AI podcast. "[There's] less of a worry that Microsoft would be left behind in this unfolding story, as much as making sure they are aware of the most attractive, most competitive newest technologies that they could partner with." In the hot AI market, there are more opportunities for AI startups to partner with big tech companies via investments than in the past, Stewart added. "This is a very ripe environment for startups that have a partnership mindset to work with the majors," he said. It's also critical that AI startups looking for investment understand where generative AI technology is going, even if they are not all incorporating the technology. Furthermore, startups must be willing to partner with investors and accept their input into the structure of their business model, Stewart said. "It's very difficult for me to accept that investors who are buying a portion of the company have no say or even protection of their own investment as the company grows," he said. "We do look critically at structures that are really intended to foil the influence of boards." Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:00:44:37

Guiding generative AI toward responsible use

1/2/2024
When Juliette Powell and Art Kleiner started working on their book, The AI Dilemma: 7 Principles for Responsible Technology, generative AI had not yet exploded into the public consciousness. But after OpenAI released its blockbuster AI chatbot, ChatGPT, in November 2022, the co-authors went back and revised their narrative to accommodate the sudden emergence of a transformative force in business and society, one that needs guidelines and regulations for responsible use perhaps more than any other new software technology. "Now that we have generative AI in our hands … we also have to have the responsibility of how they will impact not just the people around us, but also the billions of people that are coming online every year who have no idea to what extent algorithms shape their lives," Powell said on the Targeting AI podcast from TechTarget Editorial. "So I feel like we have a larger responsibility." Powell, like Kleiner, with whom she is a partner in a tech consultancy, is an adjunct professor at New York University's Interactive Telecommunications Program. The authors' second principle, "Open the closed box," is about transparency and explainability -- the ability to look into AI systems and understand how they work and are trained, Kleiner said. "That doesn't just mean the algorithm, it means also the company that created it and the people who engineered it and the whole system of sociotechnical activity, people and processes and code that fits together and creates it," he said. Another principle at the core of the book is "people own their own data." "One of the things that human beings do is hold biases and assumptions, especially about other people. And that when it's frozen into an AI system has dramatic effect, particularly on vulnerable populations," Kleiner said. "We are our own data." The book is largely based on Powell's undergraduate thesis at Columbia University about the limits and possibilities of self-regulation of AI, and drew on her consulting work at Intel. As for regulation of AI technology, Powell and Kleiner are proponents to the extent that it fosters responsible use of AI. "It's important that companies be held accountable," Powell said. "And I also think that it's incredibly important … for computer and systems engineers to actually be held accountable for their work, to actually be trained in responsible work ethics so that if people get harmed, there's actually some form of accountability." Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:45:34

Looking ahead: 2024 will see generative AI mature

12/18/2023
With 2023 being the year generative AI broke through, 2024 will be the year the technology grows and matures. Many industry experts think the hype will blossom rather than fade. "In 2024, there will not be a trough of disillusionment with this tech, ever," said Mike Leone, an analyst at TechTarget's Enterprise Strategy Group, on the Targeting AI podcast from TechTarget Editorial. "We're jumping from hype to seeing productivity enhancements and improvements." However, the year will likely bring many more AI models with both mature and immature enterprise capabilities. Enterprises may also see cost and regulation policies that could affect enterprise adoption of generative AI, Leone added. One development in the new year is a move away from large language models toward smaller models, said Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern University. "[There will be] a realization that bigger is not necessarily better all the time," Fayyad said. "Having more parameters makes a model less portable, less maintainable, often unstable, requires a lot more data and a lot more guidance." Smaller models, by contrast, are cheaper to train, maintain and revise, Fayyad added. Regulation will also continue to develop in 2024, said Ricardo Baeza-Yates, director of research at the Institute for Experiential AI. While the EU is already introducing AI policies, countries such as China are expected to join in next year, Baeza-Yates said. There will also be a push toward "grey models" instead of black box models, he added. Black box models are unexplainable, while grey models offer some level of understanding of how they work. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:01:12:09

AI-assisted driving here long before autonomous vehicles

12/4/2023
Wide use of autonomous vehicles is far off in the hazy future. But truck and "last-mile" delivery van fleets serving online shoppers are already using advanced AI technology to guide drivers to their destinations safely. As Stefan Heck, CEO of Nauto, vendor of an AI-powered driver and fleet safety system, explained it on the Targeting AI podcast from TechTarget News, Nauto uses the same driving tools as autonomous vehicles but leaves human drivers in charge. "We're not trying to replace the driver at all. We're a co-pilot or a guide, an advisor or safety warning system for the driver," Heck said on the podcast. "We use similar AI to what an autonomous vehicle does in terms of understanding what's happening." Nauto's predictive AI package uses sensors, a dual-facing camera, computer vision and neural network technology to see, understand and anticipate driving conditions in real time and issue verbal assist alerts to drivers if they take their eyes off the road, take their hands off the steering wheel or act sleepy. But unlike the tech in expensive autonomous vehicles, which are still largely in the testing phase and have run into serious safety and other operating problems in San Francisco and elsewhere, Nauto's system is more approachable at a cost of about $500 per vehicle. As for privacy considerations, while drivers are fully aware the AI system is there and can't turn it off while they're driving, Heck said the vendor tries to make it as non-intrusive as possible so drivers don't get annoyed. And the Nauto onboard box, mounted on the windshield, is polite, Heck argued. "It is an algorithm looking in real time for certain risks and behaviors only," he said. "We don't have an algorithm that says … 'Stefan's picking his nose today.' But we do look for, did you fall asleep? Did you not see the stop sign where you're not paying attention?" Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.
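The flow Heck describes -- perception signals about the driver feeding a rule layer that issues verbal warnings -- can be sketched roughly as follows. This is not Nauto's implementation; the PerceptionFrame fields, the drowsiness scale and the threshold are assumptions made purely for illustration.

```python
# Illustrative only: a simple rule layer over hypothetical driver-monitoring
# signals that maps them to spoken alerts, as a driver-assist system might.
from dataclasses import dataclass

@dataclass
class PerceptionFrame:
    eyes_on_road: bool
    hands_on_wheel: bool
    drowsiness_score: float  # 0.0 (alert) to 1.0 (asleep), assumed scale

def driver_alerts(frame: PerceptionFrame, drowsy_threshold: float = 0.7):
    """Map one frame of driver-monitoring signals to verbal warnings."""
    alerts = []
    if not frame.eyes_on_road:
        alerts.append("Please keep your eyes on the road.")
    if not frame.hands_on_wheel:
        alerts.append("Please keep your hands on the wheel.")
    if frame.drowsiness_score >= drowsy_threshold:
        alerts.append("You appear drowsy. Consider taking a break.")
    return alerts

print(driver_alerts(PerceptionFrame(eyes_on_road=False,
                                    hands_on_wheel=True,
                                    drowsiness_score=0.8)))
```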

Duration:00:40:37

Diving into Wayfair’s machine learning and AI odyssey

11/20/2023
Wayfair's machine learning strategy has been critical to its growth. The online furniture retailer's machine learning and AI journey started in 2013. "It was about 'We think we can do better business, make our dollars go longer if we actually optimize this toolkit,'" said Tulia Plumettaz, Wayfair's director of machine learning, on the Targeting AI podcast from TechTarget News. Wayfair started by putting machine learning technology to work to enhance its marketing. This meant using machine learning and AI technology to find the best medium in which to place its ads. Soon, the online retail giant was expanding its use of the technology to set prices algorithmically and understand how price changes affect demand. When Wayfair first engaged with AI, the company was mostly a "build shop," meaning it developed its AI and machine learning systems in-house, Plumettaz said. However, the company has since pivoted to a hybrid approach and started partnering with third-party vendors, notably Google Cloud. Wayfair has also tested generative AI technology from OpenAI, even though the company has historically been a Google shop, Plumettaz said. "We see the longevity of these partnerships as a mechanism of saying, 'Hey, we can use that to inform product,'" she said. "We see ourselves pretty much with a lot of vendors, as we want to be a partner as you're building your product rather than a transactional relation of, 'I buy a service from you.'" Regarding generative AI, the retailer has integrated the technology into products such as Decorify, a generative AI design tool. It is also incorporating the technology internally and in some sales operations. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the "Targeting AI" podcast series.

Duration:00:37:40

Tech industry reaction to Biden’s AI executive order mixed

11/6/2023
The tech industry is dealing with the implications of an executive order on AI signed by President Joe Biden on Oct. 30. The order aims to establish new standards for AI safety and security while protecting the privacy of American citizens, promoting innovation and spurring development of responsible AI. "It's really looking at developing guidelines and best practices really across the whole field," said Katherine Hendrickson, a senior research lead at EpiSci, an AI military and aerospace software and hardware vendor, on the Targeting AI podcast from TechTarget News. While the order holds much promise for AI system developers, Hendrickson said its main value is its focus on research and on the government partnering with research centers, while also appearing to fund a number of AI sectors. The order also shows how the federal government is promoting AI technology internally, said Forrester analyst Alla Valente. "From the language of this EO, what's clear is that the federal government is now being mandated to leverage AI, and then use that AI to improve how it does everything it does," she said. However, AI vendors in both the private and federal sectors should pay attention to the order, especially in the areas in which there is a call for standards in AI safety and security, Valente added. The executive order discusses the need for new standards to test AI, built on the National Institute of Standards and Technology's framework. "What the executive order is hoping to do is identify some of the risks as early as possible," Valente said. If that's accomplished, risk and security management practices can be embedded earlier in the AI development lifecycle, she added. While the intent of the executive order is to create standards and safety guardrails around AI systems, the lack of actionable steps stood out to Gopi Polavarapu, chief solutions officer at Kore.ai, a startup vendor of conversational AI tools for enterprises. "From a vendor perspective, it's a welcome governance that's coming from the government, but at the end of the day, we need to know what those standards are, how that's going to be enforced," Polavarapu said. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas.

Duration:00:48:58

Venture capital firm helps launch early AI startups

10/23/2023
The success of an AI startup depends not only on the technology and the problem the startup seeks to solve within the market, but also on the support it has from investors and venture capital firms. One venture capital (VC) firm that prides itself on working closely with startup founders is Glasswing Ventures. An early-stage VC firm, Glasswing focuses on investing in AI-enabled companies and what it calls "frontier tech" B2B companies, according to Kleida Martiro, a partner at the firm. "We have built strong convictions around certain areas where AI could really revolutionize certain industries," Martiro said during a Targeting AI podcast discussion. Those convictions have led Glasswing to build its mission around "connecting and protecting" AI startups. In the connect part of the mission, the VC firm looks for startups developing smart data infrastructure, automation and vertical applications. The protect part is centered on security, which includes data governance and cybersecurity. Glasswing focuses on seed and pre-seed financing of startups in their earliest stages. It guides startups by connecting them to customers, talent and further fundraising. "We serve as a true partner, we get involved as much as the startup wants us to get involved and we step aside when they don't need our help," Martiro said. "We're very much founder-first. They're part of our extended family." Startups working with Glasswing need to demonstrate that their technology addresses a critical need in the market. The startups also need to start with real talent. "When investing at such an early stage, it really comes down to the team," Martiro said. "Backing a team that can execute, has the vision, has the technical chops and the technical skills very much married with the business ... and backing good people who are hustlers is truly what makes it at this stage." Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:00:42:00

Generative AI and CX co-existing, carefully

10/10/2023
Customer experience chatbots that not only fail to deliver but also fall short of their human counterparts are the bane of CX designers' vision of an automated future. Now, the arrival of generative AI technology is promising to correct dysfunctional chatbots' missteps, ease the burden on overworked and underappreciated human customer service agents and satisfy frustrated consumers. But CX expert Don Fluckinger, a veteran tech journalist who has also worked as a CX industry analyst, casts a skeptical eye on claims made on behalf of generative AI and takes a cautionary view of automation and chatbots themselves. "Losing jobs is never all right," Fluckinger said on TechTarget News' Targeting AI podcast. "But would it be OK for generative AI to more effectively answer customer questions so that humans could monitor what it's doing and not spewing out deceptive or wrong information? That would be good." Many call centers already have AI-powered interactive voice response (IVR) systems, Fluckinger noted. And yet, these don't work all that well. "I've seen demos of these at conferences, on exhibition floors. I've read about them, but I have never run into it in real life yet," Fluckinger said. "The IVRs I hit are always pretty dumb." Meanwhile, better IVR systems could be on the horizon, and generative AI could help. Fluckinger noted, though, that while better call center and other CX platforms infused with generative AI technology are coming, they have to be tested and integrated with current systems. And, finally, companies have to buy the new technology. But the industry isn't there yet. Note: At the time this podcast was recorded, Fluckinger was a CX analyst for TechTarget's Enterprise Strategy Group. He now covers digital experience systems, end-user computing and the CPU/GPU market for TechTarget Editorial's news unit. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:41:29

Global hiring tech company Oyster treads carefully with generative AI

9/25/2023
Oyster is keeping its distance from the generative AI craze, at least for now. When the vendor, whose platform helps companies with hiring, paying and managing employees in 180 countries around the world, recently came out with a new chatbot, Pearl, it fueled it with basic conversational AI, not the generative variety. That's largely because Oyster wanted to skirt generative AI's by now well-known risks of outputting inaccurate and biased information, said Michael McCormick, senior vice president of product and engineering at Oyster, on this week's episode of TechTarget Editorial's Targeting AI podcast. The vendor is a certified B Corporation with a mandate to focus on social and environmental performance. "One of the big problems with generative AI that everyone knows about is the tendency it can hallucinate," McCormick said. "We've seen examples of people wresting control away from the intent of the generative AI programmers, and convincing the generative AI to do and say all sorts of awful things. "And there is not enough data capturing the experience of underserved and underrepresented groups," he added. "And so there's a huge amount of risk if you try to have guidance from systems like that in the HR space." Pearl is Oyster's first public foray into using AI to interact with users of its platform. Essentially, the chatbot answers, in conversational format, questions about hiring and remote employment regulations in a world of distributed work in dozens of far-flung countries. The chatbot is trained on Oyster's wealth of static information about global HR policies, taxes and benefits. So essentially it functions as a private large language model, with Oyster employees serving as "humans in the loop" to ensure that Pearl gives simple, consistent and accurate advice, thus further minimizing generative AI risk. "If you give an individual the ability to have a direct conversation with a generative AI, you give up control of what might happen," McCormick said. "And you're at the mercy of OpenAI or Bard or whomever in terms of how they try to control that." Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:49:13

Digital assistant platform vendor talks AI disruption

9/11/2023
Much of the world became aware of generative AI and large language models with the release of DALL-E and ChatGPT last year, but Conversica CEO Jim Kaskade has known about the technology since 2019. During a walk with a top AI executive at Google, Kaskade said, he learned a lot about where the tech giant was heading with generative AI technology. Once he became CEO of the AI vendor specializing in digital assistants, he looked for ways to apply the technology in a way that was disruptive on the scale of earlier world-changing technologies. Kaskade's company's brand of disruption is conversational AI and the generative AI-powered digital assistants that he sees as an automated workforce that will eventually ease the burden of much menial work now done by humans. The application of LLMs in the form of OpenAI's ChatGPT and other similar systems has seen quick adoption worldwide compared with similarly disruptive technologies such as electricity, telephone communications and television, but not all organizations are comfortable with the technology. That uneasiness is analogous to the discussion in recent years about public cloud versus private and hybrid cloud, Kaskade said. "It's just a sequence of been there, done that," he said on TechTarget Editorial's Targeting AI podcast. "Once people get really comfortable with the amount of governance that's put around the public application [product], the public cloud solutions, then the big enterprises will start to move from private LLM to public LLM. It'll take the same period of time as it did with cloud." The more comfortable companies and people are with AI technology, the more benefits they can gain from it. "Look at what happened with the computer, the PC, look what happened with the phone, look what happened with the world wide web," Kaskade said. "AI is going to be more disruptive than any of those or all of them added together." Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:00:43:07

AI facing "turtles all the way down"

8/28/2023
AI technology has become a "turtles all the way down" problem. It's a dilemma in which an AI technology is created to solve a particular problem, but in order to test that first AI tool, the tester has to use another AI technology, and then a third, and so on. According to Johna Till Johnson, CEO of advisory and IT consulting firm Nemertes Research, most enterprises try to avoid this problem by feeding proprietary data into the first AI technology and testing its output, eliminating the need to have one AI constantly test another. "The problem is, as you expand your AI outside of private data, the outputs can vary much more wildly," Johnson said during an interview on the Targeting AI podcast from TechTarget News. "You still need some form of AI to test the outputs and then you need some form of AI to test the AI that's testing the outputs, and you get your turtles all the way down again." Enterprises looking to get away from this endless feedback loop might need to stick with manually testing the output of the initial AI technology, Johnson continued. Moreover, enterprises must ensure that the data they input into the technology from the beginning is trustworthy, she said. So using an AI tool like OpenAI's ChatGPT is not advisable. "ChatGPT has been abused horribly," Johnson said, adding that if the tool is used at her small business, its output needs to be checked by a human, a time-consuming activity. "If you think about the best use of ChatGPT at the moment, it's writing really bad term papers." Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast series.
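The testing approach Johnson describes -- trusted proprietary input data, checked output and a human rather than another AI doing the grading -- can be sketched roughly as follows. This is an illustrative outline, not Nemertes' methodology; the function name, the reference-case format and the toy model are assumptions made for the example.

```python
# Hypothetical sketch: evaluate a model against trusted reference data and
# route any mismatches to human review instead of chaining a second AI
# to grade the first (avoiding the "turtles all the way down" loop).
def evaluate_against_reference(model, reference_cases):
    """reference_cases: list of (input, expected_output) pairs from trusted data."""
    manual_review_queue = []
    passed = 0
    for prompt, expected in reference_cases:
        output = model(prompt)
        if output == expected:
            passed += 1
        else:
            # A human, not another model, inspects the disagreement.
            manual_review_queue.append({"input": prompt,
                                        "expected": expected,
                                        "got": output})
    return passed / len(reference_cases), manual_review_queue

# Toy usage with a stand-in "model".
toy_model = lambda prompt: prompt.upper()
accuracy, queue = evaluate_against_reference(
    toy_model, [("abc", "ABC"), ("def", "DEF"), ("ghi", "xyz")])
print(f"accuracy={accuracy:.0%}, items for manual review={len(queue)}")
```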

Duration:00:34:01

AI: good and evil in the movies, and in reality

8/14/2023
Whether AI is good and helpful or evil and dangerous is the stuff of endless debate in tech circles during this year's "generative AI moment." In the movies, though, it's been pretty consistent: AI is the kind of malevolent force embodied by the HAL 9000 computer in the 1968 sci-fi classic 2001: A Space Odyssey. But CX analyst Liz Miller of Constellation Research, who recently wrote a blog post about AI, the movies and Salesforce, says AI should instead be seen as more like Meryl Streep's helpful assistant in the 2006 film The Devil Wears Prada. Andy, the human assistant played by Anne Hathaway, whispers useful information about a prospective customer in the Streep character's ear -- and Miller thinks we should let AI technology do the same. Indeed, it already is in some ways, in the form of digital assistants and generative AI-supported systems such as Microsoft's Copilot and Salesforce's various GPT tools. "There's this fallacy that AI was going to take everything over, when in reality what AI needed to do was take over the stuff that we did not have the capacity to do in the time that we had to do it," Miller said on TechTarget Editorial's Targeting AI podcast. "I think that's where we're starting to see AI take shape. And that's what I meant by that analogy," Miller added. "There's nothing wrong with HAL 9000. It's a great villain." Meanwhile, beyond AI and the movies, Miller touches on other topics during the podcast, including the fast-moving saga of the X social media platform (formerly known as Twitter). For her, the AI story there is not about X itself but about what happens with mercurial X owner Elon Musk's nascent AI venture, xAI. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:49:33

Tesla, federal highway officials deficient on autonomous vehicle safety: an interview with industry analyst Sam Abuelsamid

7/31/2023
Sam Abuelsamid thinks Tesla's driver assist technology is unsafe. The mobility ecosystem analyst at Guidehouse Insights is a vocal critic of the electric vehicle giant's AI-powered "Autopilot" technology. A former mechanical engineer, automotive journalist and Ford and General Motors employee, Abuelsamid also charges that the National Highway Traffic Safety Administration (NHTSA) has grossly undervalued safety considerations for self-driving and partially self-driving vehicles. While Abuelsamid acknowledges that Tesla has advanced society's views on driving technology by appealing to consumers and popularizing electric vehicles, he also refuses to call such vehicles "autonomous." Instead, he refers to them as "automated," because, as he points out, few fully driverless vehicles are on the road. In addition, Abuelsamid contends that Tesla has tried to do safety "on the cheap" by relying only on cameras to power Autopilot features and not using considerably more expensive sensor arrays. "I think they've been utterly reckless and irresponsible in their approach to automated driving by putting experimental software in the hands of average consumers who are not trained in how to properly test and evaluate this kind of safety critical software," Abuelsamid says. Meanwhile, autonomous vehicle technology vendors including Cruise, Waymo, Zoox and Motional are using multiple types of sensors, he says. One Tesla fan and investor, Ross Gerber, CEO of Gerber Kawasaki Wealth and Investment Management, has disputed Tesla safety critics. He argues that autonomously driven Teslas will get increasingly safer with hundreds of thousands of consumers driving and testing out the beta version of the popular carmaker's full self-driving capability. But Abuelsamid faults NHTSA for failing to effectively oversee the safety aspects of autonomous vehicle technology vendors. "I think the National Highway Traffic Safety Administration has been negligent in not doing more to require sharing of data from these test vehicles to build an understanding of how these things function," he says. "At a minimum what we need is the electronic equivalent of what we have to do as humans to get a driver's license." Go to TechTarget News for reports on autonomous vehicle technology and other AI developments.

Duration:00:01:00

An Interview with AI Expert Michael Bennett of Northeastern University

7/13/2023
Our guest is Michael Bennett, director of education curriculum and business lead for responsible AI at the Institute for Experiential AI at Northeastern University. Bennett, a practicing lawyer, holds a law degree from Harvard Law School and a PhD in Philosophy -- Science, Technology and Society -- from Rensselaer Polytechnic Institute. Bennett is also an occasional TechTarget contributing writer. During the 45-minute episode, Bennett discusses the impact of New York City's new Local Law 144 governing the use of AI in automated employment decision tools, which he helped draft before it went into effect on July 5, 2023. The local law is likely to have a wide-reaching effect on employers across the U.S., if only because a large number of corporations are based in or have a significant presence in the country's largest city, Bennett says. The law prohibits "employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates." Law 144 has already spun off a thriving new niche of law and audit firms providing services to help employers comply with the measure. Bennett also zeroes in on the hottest topic in the tech world at the moment: generative AI. He talks about various efforts, including projects he's involved in, to rein in, regulate and harness for effective use large language models and the AI chatbots such as ChatGPT and Google Bard that have become ubiquitous in the business and consumer spheres over the last year. On another front, AI and the arts, Bennett discusses the latest developments in copyright law as it relates to AI and also touches on the Hollywood TV writers strike and writers' concerns about generative AI systems taking over their jobs. Podcast intro/outro music by Six Umbrellas: "Joker." This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Duration:00:46:17