Targeting AI-logo

Targeting AI

Technology Podcasts

Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant community, academia and the arts as well as AI technology users from enterprises and advocates for data privacy and responsible use of AI. Topics are related to news events in the AI world but the episodes are intended to have a longer, more ”evergreen” run and they are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast will occasionally host guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions as well and also include some news-oriented episodes featuring Sutner and Ajao reviewing the news.

Location:

United States

Description:

Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant community, academia and the arts as well as AI technology users from enterprises and advocates for data privacy and responsible use of AI. Topics are related to news events in the AI world but the episodes are intended to have a longer, more ”evergreen” run and they are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast will occasionally host guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions as well and also include some news-oriented episodes featuring Sutner and Ajao reviewing the news.

Language:

English


Episodes
Ask host to enable sharing for playback control

AWS GenAI strategy based on multimodel ecosystem, plus Titan, Q and Bedrock

7/15/2024
AWS is quietly building a generative AI ecosystem in which its customers can use many large language models from different vendors, or choose to employ the tech giant's own models, Q personal assistants, GenAI platforms and Trainium and Inferentia AI chips. AWS says it has more than130,000 partners, and hundreds of thousands of AWS customers use AWS AI and machine learning services. The tech giant provides not only the GenAI tools, but also the cloud infrastructure that undergirds GenAI deployment in enterprises. "We believe that there's no one model that's going to meet all the customer use cases," said Rohan Karmarkar, managing director of partner solutions architecture at AWS, on the Targeting AI podcast from TechTarget Editorial. "And if the customers want to really unlock the value, they might use different models or a combination of different models for the same use case." Customers find and deploy the LLMs on Amazon Bedrock, the tech giant's GenAI platform. The models are from leading GenAI vendors such as Anthropic, AI21 Labs, Cohere, Meta, Mistral and Stability AI, and also include models from AWS' Titan line. Karmarkar said AWS differentiates itself from its hyperscaler competitors, which all have their own GenAI systems, with an array of tooling needed to implement GenAI applications as well as AI GPUs from AI hardware giant Nvidia and AWS' own custom silicon infrastructure. AWS also prides itself on its security technology and GenAI competency system that pre-vets and validates partners' competencies in putting GenAI to work for enterprise applications. The tech giant is also agnostic on the question of proprietary versus open source and open models, a big debate in the GenAI world at the moment. "There's no one decision criteria. I don't think we are pushing one [model] over another," Karmarkar said. "We're seeing a lot of customers using Anthropic, the Claude 3 model, which has got some of the best performance out there in the industry." "It's not an open source model, but we've also seen customers use Mistral and [Meta] Llama, which have much more openness," he added. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 35 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.

Duration:00:21:35

Ask host to enable sharing for playback control

Walmart uses generative AI for payroll, employee experience

7/1/2024
The biggest global retailer sees itself as a tech giant. And with 25,000 engineers and its own software ecosystem, Walmart isn't waiting to see how GenAI technology will play out. The company is already providing its employees -- referred to by the retailer as associates -- with in-house GenAI tools such as the My Assistant conversational chatbot. Associates can use the consumer-grade ChatGPT-like tool to frame a press release, write out guiding principles for a project, or for whatever they want to accomplish. "What we're finding is as we teach our business partners what is possible, they come up with an endless set of use cases," said David Glick, senior vice president of enterprise business services at Walmart, on the Targeting AI podcast from TechTarget Editorial. Another point of emphasis for Walmart and GenAI is associate healthcare insurance claims. Walmart built a summarization agent that has reduced the time it takes to process complicated claims from a day or two to an hour or two, Glick said. An important area in which Glick is implementing GenAI technology is in payroll. "What I consider our most sacrosanct duty is to pay our associates accurately and timely," he said. Over the years, humans have monitored payroll. Now GenAI is helping them. "We want to scale up AI for anomaly detection so that we're looking at where we see things that might be wrong," Glick said. "And how do we have someone investigate and follow up on that." Meanwhile, as for the "build or buy" dilemma, Walmart tends to come down on the build side. The company uses a variety of large language models and has built its own machine learning platform, Element, for them to sit atop. "The nice thing about that is that we can have a team that's completely focused on what is the best set of LLMs to use," Glick said. "We're looking at every piece of the organization and figuring out how can we support it with generative AI." Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.

Duration:00:23:55

Ask host to enable sharing for playback control

Lenovo stakes claim to generative AI at the edge

6/17/2024
While Apple garnered wide attention for its recent embrace of generative AI for iPhones and Macs, rival end point device maker Lenovo already had a similar strategy in place. The multinational consumer products vendor, based in China, is known for its ThinkPad line of laptops and for mobile phones made by its Motorola subsidiary. But Lenovo also has for a few years been advancing a “pocket to cloud” approach to computing. That strategy now includes GenAI capabilities residing on smartphones, AI PCs and laptops and more powerful cloud processing power in Lenovo data centers and customers’ private clouds. Since OpenAI’s ChatGPT large language model (LLM) disrupted the tech world in November 2022, GenAI systems have largely been cloud-based. Queries from edge devices run a GenAI prompt in the cloud, which returns the output to the user’s device. Lenovo’s strategy -- somewhat like Apple’s new one -- is to flip that paradigm and locate GenAI processing at the edge, routing outbound prompts to the data center or private cloud when necessary. The benefits include security, privacy, personalization and lower latency -- resulting in faster LLM responses and reducing the need for expensive compute, according to Lenovo. “Running these workloads at edge, on device, I'm not taking potentially proprietary IP and pushing that up into the cloud and certainly not the public cloud,” said Tom Butler, executive director, worldwide communication commercial portfolio at Lenovo, on the Targeting AI podcast from TechTarget Editorial. The edge devices that Lenovo talks about aren’t limited to the ones in your pocket and on your desk. They also include remote cameras and sensors in IoT AI applications such as monitoring manufacturing processes and facility security. “You have to process this data where it's created,” said Charles Ferland, vice president, general manager of edge computing at Lenovo, on the podcast. “And that is running on edge devices that are deployed in a gas station, convenience store, hospital, clinics -- wherever you want.” Meanwhile, Lenovo in recent months rolled out partnerships with some big players in GenAI including Nvidia and Qualcomm. The vendor is also heavily invested in working with neural processing units, or NPUs, in edge devices and innovative cooling systems for AI servers in its data centers. Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, analytics and data management technology. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.

Duration:00:42:41

Ask host to enable sharing for playback control

The importance of open source in GenAI

5/31/2024
The rise of generative AI has also brought renewed interest and growth in open source technology. But the question of open source is still "open" in generative AI. Sometimes, the code is open -- other times, the training data and weights are open. A leader in the open source large language model arena is Meta. However, despite the popularity of the social media's giant's Llama family of large language models (LLMs), some say Meta's LLMs are not fully open source. One vendor that built on top of Llama is Lightning AI. LightningAI is known for PyTorch Lightning, an open source Python library that provides a high level of support for PyTorch, a deep learning framework. Lightning in March rolled out Thunder, a source-to-source compiler for PyTorch. Thunder speeds up training and serves generative AI (GenAI) models across multiple GPUs. In April 2023, Lightning introduced Lit-Llama. The vendor created the Lit-Llama model starting with code from NanoGPT (a small-scale GPT for text generation created by Andrej Karpathy, a co-founder of OpenAI and former director of AI at Tesla). Lit-Llama is a fully open implementation of Llama source code, according to Lightning. Being able to create on top of Llama highlights the importance of "hackable" technology, Lightning AI CTO Luca Antiga said on the Targeting AI podcast from TechTarget Editorial. "The moment it's hackable is the moment people can build on top of it," Antiga said. However, mechanisms of open source are yet to be fully developed in GenAI technology, Antiga continued. It's also unlikely that open source models will outperform proprietary models. "Open source will tend to keep model size low and more and more capable, which is really enabling and really groundbreaking, and closed source will try to win out by scaling out, probably," Antiga said. "It's a very nice race." Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

Duration:00:50:06

Ask host to enable sharing for playback control

A vision of an AI future that may not include humans

5/20/2024
In intellectual tech circles, a debate over artificial general intelligence and the AI future is raging. Dan Faggella is in the middle of this highly charged discussion, arguing on various platforms that artificial general intelligence (AGI) will be here sooner than many people think, and it will likely take the place of human civilization. "It is most likely, in my opinion, that should we have AGI, it won't follow too long from there that humanity would be attenuated. So, we would fade out," Faggella said on the Targeting AI podcast from TechTarget Editorial. "The bigger question is how do we fade out? Is it friendly? Is it bad?" he said. "I don't think we'll have much control, by the way, but I think maybe we could try to make sure that we've got a nice way of bowing out." In addition to his role as an AI thinker, Faggella is a podcaster and founder and CEO of AI research and publishing firm Emerj Artificial Intelligence Research. In the podcast episode, Faggella touches on a wide range of subjects beyond the long-term AI future. He takes on election deepfakes (probably not as dangerous as feared, and the tech could also be used for good) and AI regulation (there should be the right amount of it), as well as robots and how generative AI models will soon become an integral part of daily life. "The constant interactions with these machines will be a wildly divergent change in the human experience," Faggella said. "I do suspect absolutely, fully and completely that most of us will have some kind of agent that we're able to interact with all the time. Meanwhile, Faggella has put forth a vision of what an AGI-spawned "worthy successor" to humans could look like in the AI future. He has written about the worthy successor as "an entity with more capability, intelligence, ability to survive and (subsequently) moral value than all of humanity." On the podcast, he talked about a future inhabited by a post-human incarnation of AI. "Keeping the torch of life alive would mean a post-human intelligence that could go populate galaxies, that could maybe escape into other dimensions, that could visit vastly different portions of space that we don't currently understand," he said. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.

Duration:00:47:04

Ask host to enable sharing for playback control

Salesforce open model AI strategy aims for trust, automation

5/6/2024
Salesforce was an early adopter of generative AI, seizing on large language model technology from OpenAI to integrate into its own applications. But the CRM and CX giant quickly evolved an open model strategy. It now gives customers access to multiple third-party LLMs while providing its own AI trust layer to try to ensure that Salesforce users can safely rely on AI-generated outputs. Jayesh Govindarajan, senior vice president at Salesforce AI, calls this approach "BYOLLLM," or bring your own LLLM. "The Salesforce LLM strategy is to provide an open-model ecosystem for our customers," Govindarajan said on the Targeting AI podcast from TechTarget Editorial. "Salesforce-developed models are, of course, available out of the box on the AI stack, but customers can also bring their own LLMs. And to support this level of choice and diversity, the trust layer is model-agnostic," he continued. As befits its core customer base, Salesforce sees sales, marketing and customer service applications as most ripe for generative AI, and that is where the vendor is focusing on the technology as a productivity engine, Govindarajan said. Similar conversations, whether taking place in email or other messaging formats, can be automated with generative AI so the technology is embedded in daily workflows. An example Govindarajan cited is using generative AI to let a marketing person easily make a marketing campaign multilingual. "How do we make a customer service person more efficient? How do we make a rock star salesperson 10 times more successful? How do we make a marketing manager create campaigns that convert really well?" Govindarajan said. "It's not easy to do that. You want to do it with safety, security, and trust," he said. "As you know, the systems can go off. So, you want to have the right guardrails in place to be able to shape it into the right form." Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.

Duration:00:36:26

Ask host to enable sharing for playback control

Examining developers' perception of AI tools at GitHub

4/22/2024
The explosive popularity of generative AI has been accompanied by the question of whether developers are finding great uses for the new technology. While the hype around GenAI has grown, the perception of its usefulness for developers has changed. "Developers are eager to kind of embrace AI more into their complex tasks, but not for every part, and they're not open to the same degree," GitHub researcher Eirini Kalliamvakou said on the Targeting AI podcast from TechTarget Editorial. On Jan. 17, Kalliamvakou released new findings that showed the evolution of developers' expectations of and perspectives on AI tools. For many developers, GenAI tools are like a second brain and serve mainly to reduce some of the cognitive burden they feel performing certain tasks. Cognitive burden in coding is produced by tasks that require more energy than developers would like to invest. "They feel that it is not worth their time," Kalliamvakou said. "This is a sort of task that is ripe for automation." Many developers are also using AI tools to quickly make sense of a lot of information and understand the context of what they need to do. While many developers find AI tools helpful, others experience AI skepticism, she added. Developers who are skeptical about AI had tried AI tools and were not satisfied. "They felt the tools are not good enough," Kalliamvakou continued. This is because the tools sometimes gave inaccurate responses and were not helpful. "What they were saying was AI [tools] at the moment, they cannot be trusted, they cannot give ground truths," she said. The two groups of developers are important to keep in mind for GitHub and other AI vendors creating tools that developers will use. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

Duration:00:38:51

Ask host to enable sharing for playback control

Musicians and the fight for fairness in the age of GenAI

4/8/2024
The growth of generative AI technology has led to concerns about the data AI technology companies use to train their systems. Authors, journalists and now musicians have accused generative AI vendors of using copyrighted material to train large language models. More than 200 musicians signed an open letter released Tuesday by the Artists Rights Alliance calling on AI developers to stop their "assault on human creativity." While the artists argue that responsible use of generative AI technology could help the music industry, they also maintain that irresponsible use could threaten the livelihoods of many. The problem is permissions, said Jenn Anderson-Miller, co-founder and CEO of music licensing firm Audiosocket, on the Targeting AI podcast from TechTarget Editorial. "It's widely understood that a lot of these training models have trained on copyrighted material without the permission of the rights holders," Anderson-Miller said. While it's true that the musicians did not produce evidence of how their works have been infringed on, generative AI vendors such as OpenAI have failed to prove that they didn't infringe on copyrighted works, she said. For Anderson-Miller, one solution to the problem is creating a collaborative effort with musicians that would include licensing. As a company that represents more than 3,000 artists, Audiosocket recently inserted an AI clause in its artist agreement. In the clause, Audiosocket defined traditional and generative AI and said it plans to support the ecosystem of traditional AI. "We don't see this as directly threatening our artists," Anderson-Miller said. "We see this as, if anything, it's helping our artists." Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

Duration:00:37:32

Ask host to enable sharing for playback control

Security, bias risks are inherent in GenAI black box models

3/25/2024
From bias to hallucinations, it is apparent that generative AI models are far from perfect and present risks. Most recently, tech giants -- notably Google -- have run into trouble after their models made egregious mistakes that reflect the inherent problem with the data sets upon which large language models (LLMs) are based. Microsoft faced criticism when its models from partner OpenAI generated disturbing images of monsters and women. The problem is due to the architecture of the LLMs, according to Gary McGraw, co-founder of the Berryville Institute of Machine Learning. Because most foundation models are a black box that contain security flaws within their architecture, users have little ability to manage the risks, McGraw said on the Targeting AI podcast from TechTarget Editorial. In January, the Berryville Institute published a report highlighting some risks associated with LLMs, including data debt, prompt manipulation and recursive pollution. "These are some risks that need to be thought about while you're building your LLM application so that you don't put your business, your enterprise, your business, at more risk than you want to take on when you adopt this technology," McGraw said. The risks are embedded in both closed and open source models and small and large language models, he added. "When people build their own language model, what they're often doing ... is taking a foundation model that's already developed and they're training it a little bit further with their own proprietary prompting," he continued. "These steps do not eradicate the risks that are built into the black box. In fact, all they do is hide them even further." These risks can be dangerous for real-world situations such as the 2024 election, McGraw said. Since the language models are built from data from all over the web -- both good and unreliable -- LLMs trained on that data can be used to produce false and malicious information about the election. "Using this technology, we need some way of controlling the output so that it doesn't get back out there into the world and just cause more confusion among people who don't know which way is up," he said. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.

Duration:00:37:20

Ask host to enable sharing for playback control

A look at independent AI hardware and software vendor SambaNova's open source strategy

3/11/2024
AI hardware and software provider SambaNova Systems seeks to put enterprise customers in charge of their data while using open source models. A smaller competitor of AI hardware vendor Nvidia, the AI vendor is trying to distinguish itself by helping enterprises train and deploy large models that they can't train on Nvidia's systems. "What we try to focus on is how do we actually create a hardware platform that allows these companies to take these hard problems where the models are really big and deploy them in a reasonable way," co-founder and CEO Rodrigo Liang said on the Targeting AI podcast from TechTarget Editorial. One way the vendor does this is by focusing on open source models. "What we decided to do some years ago was [go] fully into open source," Liang said. "We want to open the model so that everybody at any given point in time can look at the entire model and how it was trained." SambaNova introduced Sambaverse on March 6. In SambaNova's terms, Sambaverse is a playground and API where developers can test available open source large language models from a single endpoint and compare their responses for any given application. The new playground comes one week after the vendor unveiled Samba-1, a trillion-parameter generative AI model for the enterprise. The model comprises more than 50 open source generative AI models. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, unified communications software, analytics and data management technology. Together, they host the Targeting AI podcast.

Duration:00:50:43

Ask host to enable sharing for playback control

It's looking like 2024 is the year of ROI for generative AI

2/26/2024
Generative AI vendors and investors have turned their attention from last year's innovative frenzy to ROI, monetizing the language models that have revolutionized the tech world in a short time. That's the outlook on 2024 from Kashyap Kompella, founder and analyst at RPA2AI Research, who was a guest on the Targeting AI podcast from TechTarget Editorial. "If we think about it, 2023 really was the year of shock and awe for AI technology," Kompella said on the podcast. "But I think in 2024, there is going to be some amount of focus -- if not sole focus -- on return on investment." At the same time, the tech landscape is seeing in 2024 an astonishing profusion of AI language models, from the ever-expanding power of large language models (LLMs) to the rise of small and open source models, and even models adapted for mobile devices, Kompella noted. "The burst of technological innovation will continue," he said. Investors looking at generative AI tech vehicles to pump venture funds into are hoping to hit "pay dirt" this year, as Kompella put it. "But the businesses and the organizations that are looking to implement AI systems, they're going to be also focused on business value and return on investment," he said. Meanwhile, 2024 is seeing a continuation and even ramping up of the litigation surrounding generative AI systems. There is also a growing emphasis on making generative AI systems safe by attempting to reduce or eliminate bias and inaccurate outputs. Everyone from comedian and author Sarah Silverman and best-selling novelist John Grisham to The New York Times are suing generative AI vendors for misappropriating their work. "Businesses are … becoming aware of some of the risks of using the AI systems." Kompella said. "So we'll see more indemnity clauses being offered by AI vendors." Looking at the swelling generative AI market, Kompella also noted that venture capital activity in the arena is accelerating after a strong year in 2023. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:50:00

Ask host to enable sharing for playback control

AI, hiring and the need for humans in the loop

2/12/2024
The fear of AI technology eliminating thousands of jobs or affecting the hiring process continues to prevail in the age of generative AI. While many believe that AI technology will augment workers, some are already seeing the effect of AI in the job market. Indeed, tech companies and other large enterprises have laid off thousands of workers in recent months, though staffing levels are mostly still higher than before the COVID-19 pandemic. ResumeBuilder.com found in a November 2023 survey that of 750 business leaders, 44% reported AI technology would cause layoffs in 2024. The presence of AI in the hiring process has also led to laws like New York's Local Law 144. It prevents employers from using an automated employment decision tool unless they prove they performed a bias audit beforehand. This law and others are among the ways of proving accountability in the hiring process, said Cliff Jurkiewicz, vice president of global strategy at Phenom, an AI recruiting vendor. "We must be accountable for the use of artificial intelligence, and the recommendations that it may be making in our decision-making," Jurkiewicz said on TechTarget Editorial's Targeting AI podcast. While accountability is needed, removing all bias in hiring and recruiting is almost certainly unattainable, Jurkiewicz said. "It is impossible to do that," he said. "It requires humans in the loop ... to be examining how these tools are functioning and being used in organizations." Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, unified communications software, analytics and data management technology. Together, they host the Targeting AI podcast.

Duration:00:49:57

Ask host to enable sharing for playback control

B Corp Sama responds to AI data labeling criticism

1/29/2024
Data labeling and annotation vendor Sama seeks to make an impact not only in the tech market but also in parts of the world where it's hard for people to partake in the digital economy. As a women-led B Corporation chartered to do social and environmental good, Sama employs numerous people in countries such as Kenya and has created, said CEO Wendy Gonzalez on the latest episode of the Targeting AI podcast from TechTarget Editorial. She said the company has created more than 10,000 jobs in those regions. Yet Sama has faced intense criticism for paying substandard wages to workers in Africa and also subjecting them to inhumane work environments by requiring them to view and then label offensive and violent images. On the podcast, Gonzalez blamed some of the practices on its former client, generative AI giant OpenAI. She also argued that her company created decently paying jobs for people who otherwise would have trouble gaining employment. "It went beyond the boundaries of work that we were comfortable doing," Gonzalez said. "It was only in existence for a handful of months." Meanwhile, Sama's business mission is to help enterprises minimize the risk of AI model failure using its data annotating services. New multi-cloud integration Most recently, on Jan. 24, the vendor introduced a multi-cloud integration strategy in its platform to increase the speed of new project onboarding. The integration allows enterprises to keep their data on one of the three top cloud providers – AWS, Microsoft and Google -- while still giving Sama access to the data. It also enables faster onboarding to the Same platform and an integration suite compatible with Python SDKs and the Databricks platform. The integration reduces the cost of data egress because it eliminates the need for organizations to move data around in a multi-cloud model deployment, said Gartner analyst Sid Nag. "It speeds up application development via integration with other SDKs and programming language models while conforming to compliance and security models," Nag added. However, it's unclear how the Sama product gets access to the data contained in an organization's primary cloud provider, Nag continued. Ethics of data annotation and labeling While Sama has found success in the data annotation niche, it has navigated a turbulent history in Africa. Sama came under fire while performing contracted work for OpenAI in November 2021. On behalf of OpenAI, Sama hired data labelers in Kenya for a take-home pay of about $2 per hour. The labelers were charged with trying to remove toxic data from the training data sets of tools such as ChatGPT. However, some of the workers accused Sama of making them read sexually disturbing texts while paying them unfairly low wages. Although the work was beyond the norms of what Sama says it usually does in regions like Kenya, the incident still raised questions about the ethical implications of data labeling and what human workers are asked to do when removing toxic data from generative AI systems like ChatGPT. For Gonzalez, it has to do with the types of jobs available for workers like those in Kenya and how those workers can be a part of the digital economy. "If there were plentiful jobs, meaning you sort of take it or leave it, then that would be amazing," she said on the podcast "But that's not the situation. Being able to have people from around the world, globally in particular, the ones that have the greatest barriers to employment have access to the digital economy is important." Complete and effective data is also important, she continued. "You need a human in the loop to then validate that the AI or the model is interpreting that data as expected," Gonzalez said. "If it isn't, then you need to be able to flag that and then reflect and retrain that model." Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a journalist with 34 years of experience,...

Duration:00:33:32

Ask host to enable sharing for playback control

Examining Microsoft venture fund M12’s AI investment approach

1/16/2024
In the age of generative AI, Microsoft has become one of the lead investors after its massive investment in ChatGPT creator OpenAI. Since Microsoft's $13 billion investment in OpenAI, the AI market has seen changes including a tilt toward smaller and open source AI language models. Meanwhile, the tech giant's venture fund, M12, (which did not take part in the tech giant's deal with OpenAI) is still keeping its eye out for other AI startups that could be just as big as OpenAI. M12 seeks technologies that are new and transformative in the market, said partner Michael Stewart. "These are usually technologies where Microsoft does not have an existing large product," Stewart said on TechTarget Editorial's Targeting AI podcast. "[There's] less of a worry that Microsoft would be left behind in this unfolding story, as much as making sure they are aware of the most attractive, most competitive newest technologies that they could partner with." In the hot AI market, there are more opportunities for AI startups to partner with big tech companies via investments than in the past, Stewart added. "This is a very ripe environment for startups that have a partnership mindset to work with the majors," he said. It's also critical that AI startups looking for investment understand where the generative AI technology is going, even if they are not all incorporating the technology. Furthermore, startups must be willing to partner with investors and accept their input in the structure of their business model, Stewart said. "It's very difficult for me to accept that investors who are buying a portion of the company have no say or even protection of their own investment as the company grows," he said. "We do look critically at structures that are really intended to foil the influence of boards." Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:00:44:37

Ask host to enable sharing for playback control

Guiding generative AI toward responsible use

1/2/2024
When Juliette Powell and Art Kleiner started working on their book, The AI Dilemma: 7 Principles for Responsible Technology, generative AI had not yet exploded into the public consciousness. But after OpenAI released its blockbuster AI chatbot, ChatGPT, in October 2022, the co-authors went back to revise their narrative to accommodate the sudden emergence of a transformative force in business and society, one that needs guidelines and regulations for responsible use perhaps more than any other new software technology. "Now that we have generative AI in our hands … we also have to have the responsibility of how they will impact not just the people around us, but also the billions of people that are coming online every year who have no idea to what extent algorithms shape their lives," Powell said on the Targeting AI podcast from TechTarget Editorial. "So I feel like we have a larger responsibility." Powell, like Kleiner, with whom she is a partner in a tech consultancy, is an adjunct professor at New York University's Interactive Telecommunications Program. The authors' second principle, "Open the closed box," is about transparency and explainability -- the ability to look into AI systems and understand how they work and are trained, Kleiner said. "That doesn't just mean the algorithm, it means also the company that created it and the people who engineered it and the whole system of sociotechnical activity, people and processes and code that fits together and creates it," he said. Another of the principles at the core of the book is "people own their own data." "One of the things that human beings do is hold biases and assumptions, especially about other people. And that when it's frozen into an AI system has dramatic effect, particularly on vulnerable populations," Kleiner said. "We are our own data." The book is largely based on Powell's undergraduate thesis at Columbia University about the limits and possibilities in self-regulation of AI and drew on her consulting work at Intel. As for regulation of AI technology, Powell and Kleiner are proponents to the extent that it fosters responsible use of AI. "It's important that companies be held accountable," Powell said. "And I also think that it's incredibly important … for computer and systems engineers to actually be held accountable for their work, to actually be trained in responsible work ethics so that if people get harmed, there's actually some form of accountability." Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:45:34

Ask host to enable sharing for playback control

Looking ahead: 2024 will see generative AI mature

12/18/2023
With 2023 being the year for generative AI, 2024 will be the year the technology grows and develops. Many industry experts think that instead of the hype slowing, it will blossom. "In 2024, there will not be a trough of disillusionment with this tech, ever," said Mike Leone, an analyst at TechTarget's Enterprise Strategy Group, on the Targeting AI podcast from TechTarget Editorial. "We're jumping from hype to seeing productivity enhancements and improvements." However, the year will likely bring about many more AI models with both mature and immature enterprise capabilities. Enterprises may also see cost and regulation policies that could affect enterprise adoption of generative AI, Leone added. One development in the new year is a move away from large language models towards smaller models, said Usama Fayyad, executive director of The Institute for Experiential AI at Northeastern University. "[There will be] a realization that bigger is not necessarily better all the time," Fayyad said. "Having more parameters makes a model less portable, less maintainable, often unstable, requires a lot more data and a lot more guidance." Alternatively, smaller models are cheaper to train, maintain and revise, Fayyad added. Regulation will also continue to develop in 2024, said Ricardo Baeza-Yates, director of research at The Institute for Experiential AI. While the EU is already introducing AI policies, countries like China are expected to join in next year, Baeza-Yates said. There will also be a push toward "grey models" instead of black box models, he added. Black box models are models that are unexplainable, while with grey models, there's a level of understanding of how the models work. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Shaun Sutner is a senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:01:12:09

Ask host to enable sharing for playback control

AI-assisted driving here long before autonomous vehicles

12/4/2023
Wide use of autonomous vehicles is far off in the hazy future. But truck and "last-mile" delivery van fleets serving online shoppers are already using advanced AI technology to guide drivers to their destinations safely. As Stefan Heck, CEO of Nauto, vendor of an AI-powered driver and fleet safety system, explained it on the Targeting AI podcast from TechTarget News, Nauto uses the same driving tools as autonomous vehicles, but leaves human drivers in charge. "We're not trying to replace the driver at all. We're a co-pilot or a guide, an advisor or safety warning system for the driver," Heck said on the podcast. "We use similar AI to what an autonomous vehicle does in terms of understanding what's happening." Nauto's predictive AI package uses sensors, a dual-facing camera, computer vision and neural network technology to see, understand and anticipate driving conditions in real time and issue verbal assist alerts to drivers if they take their eyes off the road or hands off the steering wheel or act sleepy. But unlike the tech in expensive autonomous vehicles, which are still largely in the testing phase and have run into serious safety and other operating problems in San Francisco and elsewhere, Nauto's system is more approachable at a cost of about $500 per vehicle. As for privacy considerations, while drivers are fully aware the AI system is there and can't turn it off while they're driving, Heck said the vendor tries to make it as non-intrusive as possible so drivers don't get annoyed. And the Nauto onboard box, mounted on the windshield, is polite, Heck argued. "It is an algorithm looking in real time for certain risks and behaviors only," he said. "We don't have an algorithm that says … 'Stefan's picking his nose today.' But we do look for, did you fall asleep? Did you not see the stop sign where you're not paying attention? Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.

Duration:00:40:37

Ask host to enable sharing for playback control

Diving into Wayfair’s machine learning and AI odyssey

11/20/2023
Wayfair's machine learning strategy has been critical to its growth. The online furniture retailer's machine learning and AI journey started in 2013. "It was about 'We think we can do better business, make our dollars go longer if we actually optimize this toolkit,'" said Tulia Plumettaz, Wayfair's director of machine learning, during the Targeting AI podcast from TechTarget News. Wayfair started with putting machine learning technology to work to enhance its marketing. This meant using machine learning and AI technology to find the best medium to place its ads. Soon, the online retail giant was expanding its use of the technology to price algorithmically and understand how price changes will change demand. When Wayfair first engaged with AI, the company was mostly a "build shop," meaning it developed its AI and machine learning systems in-house, Plumettaz said. However, the company has since pivoted to a hybrid approach and started partnering with third-party vendors, notably Google Cloud. Wayfair has also tested generative AI technology from OpenAI, even though the company has historically been a Google shop, Plumettaz said. "We see the longevity of these partnerships as a mechanism of saying, 'Hey, we can use that to inform product,'" she said. "We see ourselves pretty much with a lot of vendors, as we want to be a partner as you're building your product rather than a transactional relation of, 'I buy a service from you.'" Regarding generative AI, the retailer has integrated the technology into products such as Decorify, a generative AI design tool. It is also incorporating the technology internally and in some sales operations. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the "Targeting AI" podcast series.

Duration:00:37:40

Ask host to enable sharing for playback control

Tech industry reaction to Biden’s AI executive order mixed

11/6/2023
The tech industry is dealing with the implications of an executive order on AI signed by President Joe Biden Oct. 30. The order aims to establish new standards for AI safety and security, while protecting the privacy of American citizens, promoting innovation and spurring development of responsible AI. "It's really looking at developing guidelines and best practices really across the whole field," said Katherine Hendrickson, a senior research lead at EpiSci, an AI military and aerospace software and hardware vendor, on the Targeting AI podcast from TechTarget News. While the order holds much promise for AI system developers, Hendrickson said its main value is its focus on research and the government partnering with research centers, while also appearing to fund a number of AI sectors. The order also shows how the federal government is promoting AI technology internally, said Forrester analyst Alla Valente. "From the language of this EO, what's clear is that the federal government is now being mandated to leverage AI, and then use that AI to improve how it does everything it does," she said. However, AI vendors in both the private and federal sectors should pay attention to the order, especially in the areas in which there is a call for standards in AI safety and security, Valente added. The executive order discusses the need for new standards to test AI, built on the National Institute of Standards and Technology's framework. "What the executive order is hoping to do is identify some of the risks as early as possible," Valente said. If that's accomplished, risk and security management practices can be embedded earlier in the development cycle of the AI lifecycle, she added. While the intent of the executive order is to create standards and safety guardrails around AI systems, the lack of actionable steps stood out to Gopi Polavarapu, chief solutions officer at Kore.ai. "From a vendor perspective, it's a welcome governance that's coming from the government, but at the end of the day, we need to know what those standards are, how that's going to be enforced," Polavarapu said. Kore.ai is a startup vendor of conversational AI tools for enterprises. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas.

Duration:00:48:58

Ask host to enable sharing for playback control

Venture capital firm helps launch early AI startups

10/23/2023
The success of an AI startup depends on not only the technology and the problem the startup seeks to solve within the market, but also the support it has from investors and venture capital firms. One venture capital (VC) firm that prides itself on working closely with the founders of startups is Glasswing Ventures. As an early VC firm, Glasswing is focused on investing in AI-enabled companies and what it calls "frontier tech" B2B companies, according to Kleida Martiro, a partner at the company. "We have built strong convictions around certain areas where AI could really revolutionize certain industries," Martiro said during a Targeting AI podcast discussion. Those convictions have led Glasswing to create a mission oriented toward "connecting and protecting" building AI startups. When in the connect part of the mission, the VC firm looks for startups developing smart data infrastructure and automation, and vertical applications. The protect part is centered around security, which includes data governance and cybersecurity. Glasswing focuses on seed and pre-seed financing of startups in earliest stages. It guides startups by connecting them to customers, talent and more fundraising. "We serve as a true partner, we get involved as much as the startup wants us to get involved and we step aside when they don't need our help," Martiro said. "We're very much founder-first. They're part of our extended family." Startups working with Glasswing need to demonstrate that their technology addresses critical need in the market. The startups also need to start with real talent. "When investing at such an early stage, it really comes down to the team," Martiro said. "Backing a team that can execute, has the vision, has the technical chops and the technical skills very much married with the business ... and backing good people who are hustlers is truly what makes it at this stage." Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Together, they host the Targeting AI podcast series.

Duration:00:42:00