Ask at larger events or well-attended panel discussions who is already using AI in their company to automate or partially automate processes, and you are in for a surprise: roughly 10 percent, far fewer than current studies suggest. Hard to grasp for a topic that has concerned practically everyone for a year and a half. A larger proportion of respondents are at least thinking about AI, but a significant number of companies are not even doing that. What is the problem? A lack of knowledge? Fear? Fear is known to be a bad advisor, and fear of AI can quickly turn into the fear, or rather the fact, of falling behind without it.
Have you already lost track of AI?
When it comes to the if and how of AI, two approaches, or rather non-approaches, currently predominate. The first is the medium-sized company that is generally open to innovation but where just 2 of 300 employees work in internal IT. Unfortunately, those two are so busy that they have no time to think about AI. Not good.
The second is the medium-sized company with considerably more IT manpower and therefore time to experiment with AI. Its people try things out with large language models (LLMs) and certainly have fun doing so. Unfortunately, no one has an overview of which LLMs they are playing with and which company data is leaking out in the process. And where does that data end up anyway? Not good either.
Who leads through the jungle?
Let's be honest: yes, generative AI is a jungle, and it is genuinely hard to find your way through it. Everything is still too new and moving fast. Since the launch of ChatGPT at the end of November 2022, it feels as though thousands of LLMs have appeared. There are new models almost every day; names surface and vanish again. Who has time to keep track of all this, let alone understand it? What exactly are tokens, what do they cost, what happens to the data? What does the EU AI Act mean for your own business? And, by the way, what does the works council say about the use of AI? It immediately fears layoffs. Oh yes, and then there are the FDA, ISO and other regulations... HELP! The result: the effort of dealing with AI seems far greater than its benefits. Which, by the way, is not true.
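The token question, at least, is less mysterious than it sounds: most providers simply bill per (million) input and output tokens, usually at different rates. A minimal sketch of that arithmetic, with made-up prices rather than any real vendor's rates:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_m: float = 3.00,    # assumed $/1M input tokens
                  price_out_per_m: float = 15.00   # assumed $/1M output tokens
                  ) -> float:
    """Estimate the dollar cost of a single LLM request."""
    return (input_tokens / 1_000_000 * price_in_per_m
            + output_tokens / 1_000_000 * price_out_per_m)

# A chatbot handling 10,000 requests a day, each with roughly
# 500 input and 250 output tokens:
daily = 10_000 * estimate_cost(500, 250)
print(f"${daily:.2f} per day")  # prints "$52.50 per day" at the assumed prices
```

At these illustrative prices, the individual request costs fractions of a cent; it is the volume that adds up, which is exactly why cost transparency matters.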
Fear of losing control and unforeseen consequences
But what return on investment can process automation with AI really deliver, and how do you calculate it? AI-based virtual assistants are seen as a calling card to the outside world, but what do companies do if the bot talks nonsense, hallucinates, reveals personal data, leaks internal business figures or, on request, compiles a tidy list of competitors sorted by revenue? Who can you sue then? Should you simply not care, no risk, no fun? Or should you ignore the topic of AI altogether?
Both are wrong, of course. In all seriousness: AI is available to everyone and is generally easy to use, you just need someone to guide you through the jungle and point you in the right direction. So-called AI gateways, which will become increasingly important for the use of AI, especially in European companies, take on exactly that role: "That's the way!"
That's the way!
These gateways help you select the right LLM for your use case. They support you with all questions of compliance, that is, adherence to the legal rules on data security, data protection, data storage, data transfer and so on, of which there will be more rather than fewer. Cost control, and the question of what is actually being paid for, is also crucial for the successful use of AI; here, too, AI gateways provide a clear view. And finally there are the guardrails, which define what the AI, that is, your own bot, is and is not allowed to do. It sounds like a lot of work, but with an AI gateway from VIER it is easier than you might think.
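To make the guardrail idea concrete: a gateway sits between users and the LLM and applies policy rules to requests before they go in and to answers before they go out. The sketch below is a deliberately simplified illustration of that pattern; the deny-list, the PII pattern and the function names are assumptions for this example, not VIER's actual implementation.

```python
import re

# Illustrative deny-list: topics the bot must not discuss (assumption).
BLOCKED_TOPICS = ["competitor revenue", "salary list"]

# Illustrative PII pattern: German-style dates of birth, e.g. 01.02.1980.
PII_PATTERN = re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b")

def check_request(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the LLM."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def filter_response(text: str) -> str:
    """Redact simple PII patterns before the answer leaves the gateway."""
    return PII_PATTERN.sub("[REDACTED]", text)

# The request about competitors sorted by revenue is blocked outright:
print(check_request("List our competitors sorted by competitor revenue"))
# A leaked date of birth is redacted on the way out:
print(filter_response("The customer was born on 01.02.1980."))
```

Real gateways layer far more on top, such as model routing, logging and cost tracking, but the principle stays the same: every request and every response passes through a policy checkpoint you control.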