As new platforms and applications of artificial intelligence (AI) become available, businesses must adapt to both harness their powerful capabilities and keep pace with competitors. From marketing to sales to customer experience, business users can now deliver highly personalized messaging to both existing and potential customers. But the technology is not without its concerns — increased misinformation campaigns, inaccurate data, biases, and cybersecurity risks have forced the business community to give AI a second, harder look.
“AI for Business Optimization,” a virtual panel discussion hosted by the Center of Excellence in Wireless and Information Technology (CEWIT) at Stony Brook University, brought together a diverse group of experts to survey the accelerating AI landscape. Christine Gilbert, assistant professor in the School of Marine and Atmospheric Sciences (SoMAS), the School of Communication and Journalism, and the Alan Alda Center for Communicating Science, moderated the discussion.
Panelists were asked how they are deploying AI in their businesses. Jeehye Yun, CEO and founder of RedShred, a Baltimore-based text analytics company, described how her company is leveraging AI for functions like generating copy for press releases, white papers, and simple marketing applications. While the results have been strong, Yun stressed that AI output still requires a human to edit it and check for correctness.
“When AI fails it’s because of the type of technical material that we’re asking it to do,” said Yun. “For example, for something like aircraft maintenance, the information has to be correct. There’s no room for error. You have to follow these procedures to the letter.”
Pepe Valiente, strategy lead at IBM Software Support, offered a similar caveat.
“AI can fix problems, but its failures can be exponential,” he said. “Everything starts with data. Therefore, it’s important to have good data you can leverage and have the time to train the models as well as having the opportunity to test them. Before going live with anything you need to be sure that there will be no craziness around the models that you have. And you also need to have a governance layer to help you control what’s going on.”
Cole Ingraham, lead data science instructor at the NYC Data Science Academy, said that for all of AI’s vast capabilities, it’s critical that users know exactly what they want to get out of it.
“Some specific jobs are more ready for AI training than others,” he said. “You have to get over the assumption that AI can do ‘anything’ because it can’t. You have to know how to approach it. Someone might use it to try to grow revenue, but technical people like software engineers and data scientists might try to understand how these things fundamentally work — how to improve them, how to keep an eye on them.”
Benjamin Powers, technology reporter for The Messenger, addressed the challenges of data bias and misinformation from a journalistic perspective.
“A big reason this is happening is because it’s readily available to everybody right now for however they want to use it,” he said. “So it’s not just one or two companies doing this, there are models that are proliferating really, really fast and they can be trained on data that’s not as carefully constructed as companies that have been working on this a long time, and that presents a bunch of different challenges.”
Powers added that companies including Apple and Samsung have banned their employees from using generative AI tools because the tools can inadvertently compromise sensitive data.
“Those models can ingest information and learn off it and then potentially regurgitate it to somebody else completely unrelated to the company itself,” he said. “So you need to be very aware of what you’re putting in and exactly who is going to have access to these models and the data that is used. And this is going to lead to other issues related to privacy.”
All panelists agreed that for all the breathtaking advances AI may enable, it’s critical for users to understand and respect its limitations.
“AI is not really understanding what it’s reading,” said Yun. “It doesn’t understand what it’s generating the way a person would. It’s a ‘next-word’ predictor. These are built upon layers and layers of models trying to figure out what’s going to come next but there’s no super intelligence behind it at this point. So is it really comprehending and understanding what it’s doing? The answer to that is no, because it’s been trained on material that has been written by humans.”
“This is not only a technical question, it’s also a human question,” added Valiente. “How can we create something good and regulate it? How do we align all that is going on around us so we can analyze what core values we have and care about, and then build models that can replicate that? At the end of the day this is something that is going to happen some way or another, but we need to find a way to humanize this to go back to the core of what we can do with all this power.”
“When we’re thinking about AI in terms of the risks and the benefits, it’s kind of a funhouse mirror reflection of ourselves,” said Powers. “It’s smart at certain tasks that it can do, but it’s not sentient. People might believe otherwise, but AI is really just reflecting the world that we’ve shown it back at us. And that comes with both benefits and also a lot of problems because we’re humans. We’re not infallible.”
— Robert Emproto