The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> And there you go. That's what it gave me.
With the effort ‑‑ (speaking language other than English).
>> So that is what ChatGPT is capable of. That is what large language models are capable of. And this is just the tip of the iceberg, because now we have not just version 4, but 4.5.
So those are large language models.
Among generative AI models, large language models happen to be the ones that ChatGPT uses, because it works with words and verbal communication.
AI itself ‑‑ it's all machine learning. And machine learning is basically the foundation of most artificial intelligence applications. It's not the only approach, but it's the most common one. It's ‑‑ (audio fading in and out).
Let's talk about machine learning for a moment. How does it work? We'll take the classic example of what machine learning does: classification, the basic function that a machine learning algorithm can do. Let's say we want the machine learning model to classify a picture of a dog when it sees one. Right?
When we introduce one to it. So when we do, we give it a training data set: as many pictures of dogs as we possibly can. Right?
As many as we can.
Training data. Now...
Then there is the statistical model. There are many statistical models that are used for predictive analysis. The most common are linear regression and, as you might have heard of, decision trees. What they essentially do is basically this.
They examine each picture. They look for patterns in the pictures, and they capture those patterns as a set of rules. And once it has those rules ‑‑ (audio fading in and out) you know, we can judge by the result. So we present to it another picture of a dog, one it hasn't seen in the past. And if the rules are correct, it will identify that this is indeed a dog. And then another one.
This one doesn't conform to my rules ‑‑ that is not a dog.
A third picture, which happens to be a dog, doesn't match the training data, and the model gets confused. So what do we do? We take that picture, feed it back to the model as part of the training data set, and it keeps learning. That is supervised learning. Supervised learning includes this kind of reinforcement and also what we call feature engineering: as we were giving it the pictures of the dogs, we might have labelled them ‑‑ this is what the nose of a dog looks like. We call that feature engineering.
So that's the essential mechanism through which machine learning models work.
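To make that loop concrete, here is a minimal sketch using scikit‑learn and entirely invented hand‑engineered features (not the speaker's actual model): train a classifier on labelled examples, test it on an unseen picture, and feed a correction back into the training data.

```python
# Minimal supervised-learning sketch: invented features stand in for pictures.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical hand-engineered features per image: [ear_length, snout_length, fur_score]
X_train = [
    [7.0, 9.0, 0.8],   # dog
    [6.5, 8.0, 0.9],   # dog
    [2.0, 1.0, 0.1],   # not a dog
    [3.0, 2.0, 0.2],   # not a dog
]
y_train = ["dog", "dog", "not_dog", "not_dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # the model derives its "set of rules"

new_picture = [6.8, 8.5, 0.85]       # an unseen example
print(model.predict([new_picture]))  # -> ['dog'] if the rules generalize

# If the prediction is wrong, add the example with its correct label and
# retrain -- the feedback loop described above.
X_train.append(new_picture)
y_train.append("dog")
model.fit(X_train, y_train)
```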
Now, let's project that onto what's happening in the generative AI world, in the large language models. How is that different? You have the training data.
The training data, what do you think is the training data for ChatGPT?
It's text; right?
What kind of text is it?
Some text that some company gave to ChatGPT? It's on the Web.
You're absolutely right. It's not just on the Web ‑‑ it's the entire World Wide Web, 500 billion words. And you might think how did it ‑‑ (audio fading in and out). You can download the entire World Wide Web: 500 billion words. You run it through that statistical model, and here's where it gets different. Remember the GPT part ‑‑ generative pre‑trained transformer. And that transformer architecture is really good at identifying context.
If you really want to think about it, it is like autocomplete on steroids. That's what it is. When you're typing a message on your iPhone and it completes it for you ‑‑ like I always tell my wife, I'm going to be ‑‑ (?) How does it know that?
It just learned that I say it so often, and it completes it. That's exactly what the transformer does. It gets the context without us having to label it for it ‑‑ without the feature engineering that I was referring to in classic machine learning.
So this autocomplete on steroids is what allows it to summarize the story of Cinderella so quickly, because it has learned the patterns and the context, and the training data gets to be the entire World Wide Web. So we get this summary that we're looking at. That's as far as the large language models and generative AI are concerned.
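As a toy illustration of that "autocomplete on steroids" idea ‑‑ and only an illustration; real transformers use attention over long contexts rather than just the previous word ‑‑ here is a tiny next‑word predictor built from made‑up text:

```python
# Toy next-word predictor: counts which word follows which in a tiny,
# made-up corpus, then "autocompletes" by picking the most frequent follower.
# A transformer does something far richer (attention over long contexts),
# but the spirit -- predict the next token from context -- is the same.
from collections import Counter, defaultdict

corpus = "i am going to be late i am going to be late i am going to be home".split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def autocomplete(word):
    """Return the continuation seen most often after `word`, or None."""
    if word not in next_words:
        return None
    return next_words[word].most_common(1)[0][0]

print(autocomplete("going"))  # -> 'to'
print(autocomplete("be"))     # -> 'late' (seen twice vs. 'home' once)
```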
So undoubtedly generative AI is a revolutionary milestone in the world of artificial intelligence. But we need to remember that artificial intelligence is more than generative AI, and it has been delivering for years. Let's remind ourselves of what it has been delivering. Some of the things we take for granted: Face ID on your iPhone. That's computer vision, which is a form of machine learning.
If you think, on top of that, of what it does in identifying our friends and family members in our photo albums, et cetera ‑‑ that is also machine learning. Computer vision. Right?
Now, take something again that probably never crosses our mind: we watch a football game or other sports, and we can see what is going on on the screen. It has identified that there's a player moving there, the shots going through ‑‑ all of that is happening in real time. And that is artificial intelligence. Not just that: clubs are using artificial intelligence ‑‑ have been using artificial intelligence ‑‑ to actually give them the right information.
They study the opponent beforehand, and it informs the coach of what kind of formation they need to put on the field, et cetera.
Liverpool, you know, in the league ‑‑ they have a reputation for having one of the strongest AI teams in the Premier League, and they're really leveraging that. The AI doesn't just choose the star player, so they don't often go for the biggest stars; right? They're getting the right players to allow Liverpool to win, but they're not necessarily paying the biggest bucks for the biggest star. So that's another implementation of AI.
Something else that's been happening ‑‑ and it's happening to us right now ‑‑ is social media apps: your Facebook, Instagram, TikTok. It's all happening in the background. They're looking at our behavior and analyzing it in order to suggest to us what we should look at and whom we should follow.
They actually deliver to us an ad that they predict is going to be of interest, based on our individual behavior. Right?
So that's been going on for at least the past decade, hasn't it? And we're all being shaped by social media in many different ways. That's artificial intelligence; that's machine learning.
Now we move on to the enterprise role, the world of business. Right? One industry that has been benefiting from AI is the insurance industry. Typically, you make an insurance claim, and it either gets approved or rejected based on the kind of damage, the analysis of the accident, et cetera. Assessing the actual cost of the repair ‑‑ that used to be done by humans.
Today it's supervised by humans, but essentially a lot of the effort that goes into this analysis is handed over to artificial intelligence. So you run the pictures through an AI model ‑‑ a machine learning model ‑‑ and it's able to give you a recommendation on whether to approve the claim or not.
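A purely illustrative sketch of that flow ‑‑ the damage‑scoring function here is a hypothetical stand‑in for a trained vision model, not any real insurer's system ‑‑ showing how a photo‑based severity score could drive an approve‑or‑review recommendation:

```python
# Hypothetical claims-triage sketch: a stand-in vision model scores damage
# severity from photos, and a simple rule turns that into a recommendation
# for a human adjuster. All values and names are invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    decision: str
    reason: str

def estimate_damage_severity(photo_path: str) -> float:
    """Stand-in for a trained computer-vision model; returns a 0..1 severity."""
    return 0.3  # placeholder value for illustration

def triage_claim(photo_paths, claimed_amount: float) -> Recommendation:
    severity = max(estimate_damage_severity(p) for p in photo_paths)
    expected_cost = severity * 10_000  # hypothetical severity-to-cost mapping
    if claimed_amount <= expected_cost * 1.2:
        return Recommendation("approve", f"claim consistent with severity {severity:.2f}")
    return Recommendation("human_review", "claimed amount exceeds model estimate")

print(triage_claim(["bumper.jpg", "door.jpg"], claimed_amount=2500))
```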
Now, I come from a telecom background. We used to use artificial intelligence in our network planning. When you're routinely handling customers, as my company was, you're basically looking at a very fluid environment of usage. Right?
So you have certain times of the year where there's going to be a lot of demand in a particular area and less demand elsewhere, and you need to be in a position to dynamically predict the usage. We used AI, looking at terabytes of data, to predict where the demand was going to come from, where we were going to have shortages, et cetera. And we used it for understanding customer behavior: if certain customers were likely to leave our network in favor of a competitor, the model would give us early warning signals so we could save that customer by giving them a compelling offer.
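A minimal sketch, with made‑up features and customers, of the kind of churn model described here ‑‑ scikit‑learn logistic regression flagging customers whose predicted probability of leaving is high, so a retention offer can be made:

```python
# Toy churn-prediction sketch. Features are invented: monthly_spend,
# support_calls, months_on_network; label 1 = churned, 0 = stayed.
from sklearn.linear_model import LogisticRegression

X_history = [
    [20.0, 5, 3],   # low spend, many complaints, new customer -> churned
    [80.0, 0, 48],  # high spend, no complaints, long tenure   -> stayed
    [25.0, 4, 6],
    [90.0, 1, 60],
]
y_history = [1, 0, 1, 0]  # 1 = left for a competitor

model = LogisticRegression()
model.fit(X_history, y_history)

current_customers = {"cust_42": [30.0, 3, 5], "cust_99": [85.0, 0, 50]}
for cust_id, features in current_customers.items():
    p_churn = model.predict_proba([features])[0][1]
    if p_churn > 0.5:
        print(f"{cust_id}: churn risk {p_churn:.2f} -> send retention offer")
```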
Another machine learning application: predictive maintenance. In the manufacturing industry, you don't want to wait until a piece of equipment fails and stops your production line ‑‑ you want to anticipate that early enough. And machine learning has been helping manufacturers do exactly that: predictive maintenance.
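One common way to sketch this ‑‑ purely illustrative, with invented sensor readings ‑‑ is anomaly detection on equipment telemetry, for example with scikit‑learn's IsolationForest:

```python
# Toy predictive-maintenance sketch: learn what "normal" sensor readings look
# like, then flag new readings that look anomalous so the machine can be
# serviced before it fails. Readings are invented (temperature, vibration).
from sklearn.ensemble import IsolationForest

normal_readings = [
    [70.1, 0.02], [69.8, 0.03], [70.5, 0.02], [70.0, 0.025],
    [69.9, 0.028], [70.2, 0.022], [70.3, 0.024], [69.7, 0.021],
]
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_readings)

new_readings = [[70.1, 0.023], [82.4, 0.09]]  # second one is running hot and shaky
for reading, label in zip(new_readings, detector.predict(new_readings)):
    status = "OK" if label == 1 else "ANOMALY -> schedule maintenance"
    print(reading, status)
```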
Right. So as we're talking about AI for enterprises, the question is: what does it take to enable machine learning in your enterprise?
The first element is data. No data, no AI. Sounds like a Bob Marley song, right? So no data, no AI. But it's not just data ‑‑ it's actually so much data.
I was just telling you about the example from telecom ‑‑ we were literally producing terabytes of data on a daily basis. The more data you have, the more opportunities you have for your machine learning models to learn. Right?
Take the example of, you know, the World Wide Web, or the dogs we talked about. The more data you put in, the better it's going to be.
So the first element is data. The other element is the geek. Right?
A key component of any machine learning environment, an AI environment is a data scientist. Right?
So when we talked about those complicated models ‑‑ decision trees, regression analysis, et cetera ‑‑ you need somebody who knows how to programme them. They need a combination of two skills: they need to understand statistics, and they need to be good at programming, typically Python or R. That combination of skills is the skill set of a data scientist. And, you know, there's been a race to hire those people. I can tell you, we were hiring them in my company, and they would stay with us for a year, and then they were off to, you know, double their salary somewhere else. It's a very competitive market.
And that's what makes it difficult for companies to grow that AI capability within the organisation. Those are the impediments.
And what else do you need?
Well, if you want to process terabytes of data, you need a huge data centre. You need a lot of storage, you need a lot of computing. That is something you now don't absolutely need to own, because we have cloud. Right? And I think that's a great thing about the fact that we have cloud. Cloud saves us from investing in huge data centres, especially since you don't need all that capacity on an ongoing basis ‑‑ you need it when you run a model at a particular point in time.
So you get that elasticity from the cloud ‑‑ you just use it when you need it, and you're not paying for a full‑scale data centre.
So if we were to summarize the components: the computing and the algorithms are pretty much accessible to any organisation today. Why? Because the likes of AWS or Google will provide those to you, and you can pay as you go. So there's no real obstacle there. The obstacle might be that if you do it a lot, the bill might be a little high. But at least it is accessible.
So the models, they exist on the platforms like AWS and Google and Microsoft and the computing likewise.
The challenge is here. Right? The data and the talent.
So that, I think, is what has held organizations back from progressing on AI over the past years. Those that have been able to capture the data and the talent are the ones that have been able to make a difference with AI at the core of their business. Right?
So that ‑‑ that's as far as the classic AI, the machine learning, is concerned. But that paradigm is changing. Because generative AI is imposing a new paradigm. Right?
Specifically, what is changing is that AI is becoming everyone's business. It has become accessible to everyone. You don't need to invest in the data scientist and the data in order to have some generative AI capability. Right?
So think of this. How many of us are able in our day‑to‑day work to leverage AI to help us with our writing?
Show of hands, please. Okay.
That's the majority. Maybe PowerPoint. Less?
Yeah. Okay. That's great. So it is accessible to us because it's just so easy ‑‑ you don't need to actually buy anything, you just, you know, pay pennies, and sometimes the tools are even free.
And likewise for illustration. Creative work. These are some of the people who, you know, leveraged AI and maximized the use of it, whether it's artists, composers, et cetera. So that is something that's becoming accessible.
Now, of course, software development ‑‑ AI assistants for writing code are very much commonplace today. Many developers are leveraging AI to help them with that.
And finally, I think last but not least is learning. Right?
So ‑‑ and learning can start from, instead of Googling, just asking the likes of ChatGPT, and it will give me an answer. Or it can be actual tutoring, like we have in Khan Academy, for example, if you're familiar with that ‑‑ an actual tutor that helps you. Generative AI is allowing that to become ubiquitous at the personal level.
And so is the geek still necessary in every use of AI? Do we need the data scientist?
For the examples that I mentioned, we don't ‑‑ the data scientists are sitting in the background at OpenAI and Microsoft, but on a day‑to‑day basis we don't need them in our organisation. And so it's out with the geek and in with what?
In with natural language conversation.
You ask the AI, you know, please generate for me a PowerPoint presentation about such and such. And a very impressive tool I looked at recently is called Builder AI. It's basically a piece of software that allows anybody to have a conversation with a chatbot, verbally: I want to build a web page for a marketplace where ‑‑ and you give it a description of the marketplace that you have. It goes off in the background and generates the website for you. It's that incredible.
So really, we're now using natural language conversation. And that's what makes it so compelling.
And the list goes on and on and on. I mentioned Builder AI, but look at the hundreds of start‑ups that are coming into this space ‑‑ new start‑ups every day. In fact, we have a statistic from Gartner on the number of generative AI foundation models being created. How often do you think we're seeing a new foundation model?
I'll give you some choices. Right? So...once a month?
New foundational, new generative AI model.
Once a month?
Once a week?
Once a week sounds reasonable. Yeah. Well, it's actually every two and a half days. Every two and a half days a new foundation model is created. That is the race ‑‑ there's a race for a land grab on AI, specifically driven by generative AI.
Now, so here comes the question of this presentation.
Is your organisation ready?
Well, I'll give you another statistic. This is a survey, also from Gartner. In the years before 2023, we typically asked our clients, who are technology leaders, what they think of AI ‑‑ whether they think AI will significantly impact their industry.
This is a survey of CEOs. Okay? And so the question was: do you think AI will significantly impact your industry?
A lot of CEOs felt that this was a bit distant from their business, from their industry. Like, AI ‑‑ what do I even think of when I hear the word AI?
So only 20% said it would. Only 20%. Then in 2023, that changed to 59% of CEOs believing it will make a difference in their industry. And this year, in 2024, it jumped up to 74%. 74% of the CEOs we surveyed believe generative AI will have a profound impact on their industry. Right?
Now, what this tells us is that there is certainly a big appetite for AI as far as leadership is concerned. We work with a lot of clients, and we're seeing that pressure on the technology leaders we work with.
They are being asked to do something with AI. There's a fear of missing out: there's something we need to do here; how can we just watch and miss the boat? So that's a reality.
Also, if you look at Gartner's hype cycle, which is basically a reflection of the different emerging technologies and their states of adoption and maturity: generative AI was at the peak of inflated expectations and is now kind of normalizing. (Audio improved on Zoom.) But generally, what you're seeing there is wide adoption of generative AI. So when I ask the question, is your organisation ready for AI, I think the simple answer is: organizations have expectations of AI. That is a fact. So that's good news ‑‑ there's this eagerness, this hunger for AI. Now comes the question.
Is your data ready for AI?
Now, the data discussion on AI is a bit nuanced, because you know we talked about machine learning and we talked about generative AI, but they're not exactly the same animal. Let's have a look at that. So typically this is what a data and analytics landscape would look like in terms of its components.
So you've got different data sources: operational systems, mobile applications, websites, et cetera. Then you've got some infrastructure related to analytics, whether that's a data warehouse, a data lake, or a data mart. Then you've got integration mechanisms like data streaming, batches, ETLs. And then you've got data governance, which is basically more of a management activity.
And then you've got virtualization layers, and then you've got the actual presentation and analysis layers related to data science and machine learning. You've got business intelligence, which has been the mainstay over the past decades.
And then you can build some external services on top of that. So that's kind of the overall ecosystem, if you will. Let's simplify it a little bit and think of a data warehouse, because this is really where it all originated. A data warehouse basically tries to capture all the data that you have in your organisation and centralize it into a central repository that can then serve the organisation with insights. The insights don't necessarily have to be AI ‑‑ they could be just analysis, you know, through a Power BI report, for example.
So typically what you have there is what we call an ETL: extract, transform, and load. You're trying to collect the data from all those different operational databases, put it in a staging environment, and structure it. The key word here is "structure" ‑‑ the big effort was structuring the data, preparing the data for consumability. Right?
So we had to do that through the transform and load, and then we put it into the data warehouse. Once it's in the data warehouse, let's build a little data mart for our marketing guys, one for our finance guys, another one for our operations guys, so they can actually consume the data through reports, through things like Power BI, et cetera. That is the classic way of going about your data and analytics environment.
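A minimal sketch of that ETL pattern ‑‑ the table name, columns, and the SQLite "warehouse" below are all invented for illustration, standing in for real operational systems and a real data warehouse:

```python
# Toy ETL: extract rows from an "operational" source, transform them into a
# structured shape, and load them into a central warehouse table (SQLite here
# stands in for a real data warehouse). All names and columns are invented.
import sqlite3

def extract():
    # Pretend these rows came from an operational system or an API.
    return [
        {"order_id": 1, "amount": "19.99", "country": "ae"},
        {"order_id": 2, "amount": "5.00", "country": "EG"},
    ]

def transform(rows):
    # Structure and clean: cast types, normalize country codes.
    return [(r["order_id"], float(r["amount"]), r["country"].upper()) for r in rows]

def load(rows):
    conn = sqlite3.connect("warehouse.db")
    conn.execute("CREATE TABLE IF NOT EXISTS fact_orders (order_id INTEGER, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()

load(transform(extract()))  # marketing/finance data marts would then query fact_orders
```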
The key words there were two: structured and centralized. All of that is based on structured data, and we're trying to centralize the data. And we've got centralized technology.
Now, when you think of generative AI, like I said, it creates a new paradigm. You don't have to have structured data. You don't have to have centralized databases, or even centralized technology.
So that is changing. And let's have a look at what that means.
If you think of the use cases ‑‑ we've been asking our clients using generative AI where it has delivered for them.
And in most cases ‑‑ you've got 21% saying software development is where it has been most effective. Right? 19% saying the call centre and the help desk, where it's been very effective. And 19% in marketing content creation. And HR self‑service, 4%. These numbers are changing by the day, but these use cases have proven themselves more than others. Let's take a moment to zoom in on each one of them and look at what it means in terms of data.
If you think of a call centre agent: they take the call, and very much like what's happening with me now, the call is being transcribed in real time. Right? So the generative AI is playing in the background. It's listening to the agent, listening to the customer, and it starts interpreting what's going on.
And through the intelligence that it has ‑‑ through the access it has to corporate policies, to our customer care portfolio ‑‑ it's recommending to the agent what they need to do, what to advise the customer on the call. Right?
Not just that: after the call is over, it's able to assess the agent and actually, you know, do the work that a supervisor would typically do in a back office.
So that is a compelling use case, and it's working very well. We haven't yet reached the stage where we're saying we're replacing the customer service agent. It's probably going to happen, you know ‑‑ maybe two years from now; five? I don't know. But it's probably going to happen, because at the rate of acceleration we're seeing in the maturity of the technology, it will be good enough. Right now it's about assisting the customer service agent.
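A very rough sketch of that agent‑assist pattern ‑‑ `call_llm` below is a hypothetical placeholder for whichever hosted model an organisation uses, and the policy snippets are invented:

```python
# Sketch of real-time agent assist: feed the live transcript plus relevant
# policy snippets to a large language model and surface a suggested next step
# to the human agent. `call_llm` is a placeholder, not a real vendor API.
POLICY_SNIPPETS = [
    "Refunds are allowed within 14 days of purchase with a receipt.",
    "Damaged items can be replaced free of charge within 30 days.",
]

def call_llm(prompt: str) -> str:
    """Placeholder for a hosted LLM call (OpenAI, Azure, a local model, etc.)."""
    raise NotImplementedError("wire this to your model provider")

def suggest_next_step(transcript_so_far: str) -> str:
    prompt = (
        "You are assisting a call-centre agent. Company policies:\n"
        + "\n".join(f"- {p}" for p in POLICY_SNIPPETS)
        + "\n\nLive call transcript:\n" + transcript_so_far
        + "\n\nSuggest, in one sentence, what the agent should say or do next."
    )
    return call_llm(prompt)

# Usage: suggest_next_step("Customer: I bought this 10 days ago and it broke...")
```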
But when you think of what that means in terms of data ‑‑ what key data asset have we used there? It's an audio file. It's not even an audio file; it's live audio. Right?
And perhaps also combined with our policies, our regulations, and our service portfolio ‑‑ and that is something that's probably sitting in a PDF document or something. Another use case that's quite common: AI for resumé screening. That's being used extensively by the HR folks, and the data asset there is basically email. Right?
So that's unstructured data.
Think of an advisor on legal matters ‑‑ that's another use case that's picking up: using AI to advise you on legal questions, basically alongside lawyers. Likewise for HR, when it comes to your HR policy. So what are we looking at here? We're looking at a PDF repository. And a PDF repository is also a form of unstructured data ‑‑ it's not something you can put into a database, right? And if you think of programming and software development, the data source there is a Git repository, a code repository. As you can see, the theme we're building here is that the data is very much unstructured.
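For instance ‑‑ a minimal sketch, assuming the pypdf package and an invented folder of policy documents ‑‑ pulling raw text out of a PDF repository so a model can work with it:

```python
# Toy sketch: walk an (invented) folder of policy PDFs and extract their raw
# text with pypdf, so the unstructured content can be handed to an AI model.
from pathlib import Path
from pypdf import PdfReader

def load_pdf_texts(folder: str) -> dict[str, str]:
    texts = {}
    for pdf_path in Path(folder).glob("*.pdf"):
        reader = PdfReader(str(pdf_path))
        texts[pdf_path.name] = "\n".join(page.extract_text() or "" for page in reader.pages)
    return texts

# Usage (hypothetical path):
# documents = load_pdf_texts("hr_policies/")
# ...then chunk, embed, or prompt a model with `documents` as context.
```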
And when you think of unstructured data, you need to think of a messy room; right?
So imagine yourself walking into a messy room, and there's data everywhere. Right? I mean, there's data ‑‑ we can't even see it. You know?
The beauty of generative AI ‑‑ before generative AI, if you think of this analogy, we would have to clean up every inch of that room in order for us to use the data.
But with generative AI, you don't need to clean it anymore. You just leave it up to generative AI, and it's able to pick up the data lying on the floor, the data on the sofa, the data in the pot.
And even the data that we're not seeing. It will figure out that there is a pair of running shoes under that cupboard, they're size 9, and their color is pink. Right? So it's actually identifying the data that you're not seeing. You might think that's amazing ‑‑ I don't need to structure my data anymore, I don't need to do the housekeeping, I can be lazy. To some extent, but not quite. Why? Because, first of all, it's expensive: if you're going to fully rely on generative AI to do the housekeeping, it's an expensive housekeeper. But there's another big reason why.
Because of the risk. Right?
So think of who you are going to let into your room.
Who are you going to allow to touch your stuff?
Right?
So access rights are an extremely important part. You let it in, and you're basically allowing it to vacuum up everything it can. It will label everything; it will capture all the data. And that might not go well for you. Think of your corporate presentations, your payroll, your organisation charts, et cetera. With all of that, you need to be careful.
You don't want to leave the door open without control.
So with access rights, basically what we're saying is: get your data access rights in order. Your data will not be ready for AI until you do that.
That's data access rights for unstructured data.
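A minimal sketch of that guardrail ‑‑ an invented access‑control list filtering which documents may be handed to the model at all, before any prompt is built:

```python
# Sketch: enforce access rights before any document reaches the model.
# The documents, ACL, and user groups are invented for illustration.
DOCUMENTS = {
    "hr/payroll_2024.xlsx": "confidential payroll figures ...",
    "support/returns_policy.pdf": "customers may return items within 14 days ...",
}

ACCESS_CONTROL = {
    "support/returns_policy.pdf": {"support_agents", "everyone"},
    "hr/payroll_2024.xlsx": {"hr_managers"},
}

def allowed_documents(user_groups: set[str]) -> dict[str, str]:
    """Return only the documents this user's groups are entitled to see."""
    return {
        name: text
        for name, text in DOCUMENTS.items()
        if ACCESS_CONTROL.get(name, set()) & user_groups
    }

# A support agent's AI assistant never even sees the payroll file:
context = allowed_documents({"support_agents"})
print(list(context))  # -> ['support/returns_policy.pdf']
```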
The other risk that we need to manage is data interpretation. Now, we've all heard about AI hallucination, yes? AI hallucination is when it interprets things incorrectly.
So large language models can sometimes get it wrong. Sometimes they can get it dramatically wrong.
I'll give you a simple example. It might not be a large language model example, but it shows how AI can be wrong. You see these pictures. These are pictures of what?
Bagels. Right? But within the bagels, what else do we have there?
We have dogs. Right?
AI doesn't see that. You know, that's the thing ‑‑ it classified them all as being bagels. Another one.
Muffins. Right?
You see the dogs there?
Okay. I'm sure you do, because you are a human. Right? AI builds up from the details; humans fill in the missing details with their experience. So we need to be careful with what the AI gives us ‑‑ the risk of misinterpretation. If we rely on it blindly, then we can really go astray.
The second aspect of data readiness, the second risk, is that we need to guide the model with our context. And that is basically two things: semantics and fine‑tuning. Right? Semantics is basically where you tell it, for example, what revenue means.
Remember, ChatGPT has knowledge from the whole world, so it knows what revenue means in general. It doesn't know what revenue means for my organisation. Right?
A good example: a client of mine provides citizen services, but they provide citizen services within a specific jurisdiction. Right?
And they were trialing a generative AI chatbot with citizens, where basically a citizen would come in and ask for a service they weren't entitled to. Right? So the AI had to know that this person doesn't live within the jurisdiction of those services, and it had to tell them: I'm sorry, you're not a resident of this particular county.
Now, it didn't do that. It was actually offering them the service. And that was a problem, because the semantics hadn't been set up in a way that told the model what is meant by a citizen ‑‑ a citizen entitled to this particular service.
So that's semantics ‑‑ you need to work on your data dictionary ‑‑ and then fine‑tune the model. We talked about generative AI not needing supervised learning. Well, I wasn't 100% accurate when I said that. Generally it doesn't, but when you want it to be useful for a particular use case, you need to fine‑tune the model. Right? So that's the other aspect of this: semantics and fine‑tuning.
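One simplified way to picture both steps ‑‑ a small semantics block injected into every prompt, plus fine‑tuning examples in the JSONL style many providers accept; the definitions and examples below are invented, not the client's real data:

```python
# Sketch: (1) a "semantics" layer -- organisation-specific definitions prepended
# to every prompt; (2) fine-tuning examples teaching the model the jurisdiction
# behaviour described above. All content is invented for illustration.
import json

SEMANTICS = {
    "citizen": "a registered resident of Example County only",
    "revenue": "recognized revenue per our finance policy, net of refunds",
}

def build_prompt(user_question: str) -> str:
    definitions = "\n".join(f"- {term}: {meaning}" for term, meaning in SEMANTICS.items())
    return f"Use these definitions:\n{definitions}\n\nQuestion: {user_question}"

fine_tuning_examples = [
    {
        "messages": [
            {"role": "user", "content": "I live outside Example County. Can I get this service?"},
            {"role": "assistant", "content": "Sorry, this service is only available to residents of Example County."},
        ]
    },
]

with open("finetune.jsonl", "w") as f:
    for example in fine_tuning_examples:
        f.write(json.dumps(example) + "\n")

print(build_prompt("Am I eligible for the housing grant?"))
```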
So you know, when I talked about the housekeeper and we can be lazy, all right, I was only joking.
Data management actually continues to be a necessary practice for taming generative AI. It's absolutely necessary ‑‑ in fact, it's even more important today. But perhaps we're focussing less on the laborious effort of structuring the data and more on the contextual effort of defining what the data means.
So we said there's a new AI paradigm for enterprises: the data is unstructured, and the data is no longer centralized. The other thing is that applications are no longer centralized. Right?
So think of this: Gartner estimates that of application providers ‑‑ your software companies ‑‑ only 5% today have embedded AI in their software. Only 5%.
By 2026, we believe 80% of all software providers will have some form of embedded AI. Now, 2026 is just around the corner, right? So that's going to happen very soon.
Meaning that the AI you will leverage and utilize is not just the AI that you built, it's actually the AI that will come to you with your software. Right?
And again, that's good news, but it could be bad news. Remember when we talked about who you're going to let into your messy room?
So actually ‑‑ and this is from Gartner ‑‑ the Magic Quadrant. This is one of the recent Magic Quadrants, one we built specifically for the emerging market of generative AI knowledge management applications. As you can see there, look at the number of players ‑‑ a lot of them will be familiar to you ‑‑ all in a race to add AI features and functionality to their software. Right?
And this Magic Quadrant we update every quarter. Typically we update Magic Quadrants every year; for this one, we update every quarter because the pace is phenomenal ‑‑ it changes from quarter to quarter. So that's what's happening. The reality is that you're going to get embedded AI, not just the AI that you build. And then there's another phenomenon, which is even more dangerous: what we call bring your own AI.
Right?
Remember bring your own device?
Now it's bring your own AI. Because you know what? You've got your HR folks saying, we have this nice tool ‑‑ our colleagues in company X are using it, and it's fantastic, it makes our life so much easier, you don't have to read all the CVs, et cetera. You've got your marketing folks who are creating artifacts, and they never ask for permission. There's this phenomenon of bring your own AI that is being progressively introduced into our organisations.
And so if we look at the landscape, the evolution of the AI tech stack ‑‑ this is the classic AI tech stack. Remember the diagram I was showing you earlier ‑‑ this is what we used to have. All data was centralized and structured. You've got an AI platform that you build ‑‑ your built AI ‑‑ and then you serve different functions in your organisation. Right?
How is that changing?
First thing, the data: we still have some data centralized ‑‑ like we talked about, our policies, our customer records, et cetera. Yeah, that's cool; we have it, it's centralized. But now data is also coming from everywhere and from every client.
We talked about bring your own AI; we talked about embedded AI. And you're going to have your AI platform, and you're going to build a lot of blended AI, meaning that, for example, you can take a model from OpenAI or from Microsoft and leverage it within an application of yours, in order not to reinvent the wheel. Right?
So you've got the blended AI; on top of that you have embedded AI, where you have no control whatsoever because it's embedded in the software; and you've got your bring‑your‑own‑AI efforts, which are completely wild and out of control.
And in order for us to make sure it doesn't get wild and out of control, here comes this middle layer.
The trust, risk, and security management.
So we alluded to that when we were talking about semantics and about access rights. That's extremely important, but it's a conceptual layer that every organisation will need to build in order to mitigate the risks of generative AI. And then, on top of that, you're going to have to have some governance ‑‑ some actual, you know, committees. You're going to have a central AI committee that looks at what we are going to allow in the organisation and what we cannot permit.
You want to have communities of practice where, you know, people are exchanging knowledge and experiences about their AI.
And you're going to have the trust, risk, security, and oversight. This, my friends, is what Gartner calls the technology sandwich. All right?
This is our AI technology sandwich, and it basically describes how the AI landscape is evolving. In fact, it's a paradigm shift from how it has existed in past years.
And so we invite every company, every organisation to really understand what the sandwich means for them, and to look at what they need to introduce. It's very much a learning curve ‑‑ I don't think any organisation we've seen has fully figured it out. This is a conceptual framework, and we need to make sure we're learning how to apply it.
And so let me conclude.
First point: I need to emphasize that we are at the cusp of an AI revolution. It was triggered by ChatGPT, but it's not going to end there.
The other takeaway is that, at the individual level, you're already feeling the impact. It's making us much more productive; each of us is using it in different ways. And suddenly I see emails that are so proficient ‑‑ maybe one year ago they read quite differently. So that is a reality.
For enterprises, it will take longer than for individuals, because of the risks and the challenges of introducing AI safely. And in order to introduce AI safely, you need data management practices ‑‑ that's a key input ‑‑ basically two of them: access rights, and semantics and fine‑tuning.
And for IT leaders, technology leaders: you need to be prepared that you will not have everything centralized and fully under control. You will have to accept that there will be an ecosystem around you; you need to put the guardrails in place rather than own every piece of AI in your organisation. And that basically means you need to prepare and customize your own technology sandwich.
Bon Appetit.
Thank you very much. Thank you for your time. Please take the time to fill in the survey on what you think of this session ‑‑ the QR code will take you to a landing page.
I have a question. Please.
(Off mic.)
>> ALAA ZAHER: I'll summarize. Amal was asking: when it comes to the technology sandwich, what experiences have we seen in terms of organisations fulfilling it successfully; right?
It's a tricky question, because, like I said, the technology sandwich is a concept we came up with a month ago. But if you break it down into its components, what we're seeing are organizations that are fulfilling bits and pieces of it. We're seeing organizations that are introducing very strong security management practices.
We're seeing organizations that have committees for governance. We're seeing organizations that are introducing data management and really trying things out. So, you know, I was telling you about the example of the organisation that was serving its citizens with a pilot chatbot. Interestingly enough, another instance where it went wrong was when somebody said, I'm unhappy with the service ‑‑ right ‑‑ and the chatbot, the generative AI model, responded: okay, if you're unhappy, you can escalate to the office of the minister. Right?
You would never get a call centre agent telling you to escalate to the office of the minister ‑‑ they should be proposing some solution. So what they learned on the back of that exercise is that they really need to double down on the semantics and the fine‑tuning. We see a lot of organizations that have made those trials and are learning how to master the art of fine‑tuning, because it's not easy. You need to look at all the consequences, all the possibilities and feedback, and feed the learning back into the model. So, you know, it's an evolving landscape, and I think we're all on that journey to learn together.
Thank you very much for your question.
Any other questions?
Yes, please?
Can you pass the mic?
>> AUDIENCE MEMBER: So ‑‑ yeah. So it ‑‑ it's a nice innovation, and I believe the technology sandwich you showed ‑‑
>> ALAA ZAHER: Can you turn the volume up? Okay. Because that goes straight to the headset. Never mind, I'll just come closer. Everybody else can hear it ‑‑ it's just me who can't.
>> AUDIENCE MEMBER: Yeah, so the technology sandwich ‑‑ the big AI labs like Google DeepMind and OpenAI have certain frameworks: OpenAI has its preparedness framework, and DeepMind has the SAP framework. What would you like those labs to do in line with your technology sandwich?
>> ALAA ZAHER: Thank you, that's an excellent question. The question is about the big tech giants, the people who actually produce the generative AI. Remember we said most of us will not create generative AI, we'll just leverage it. Right? We'll leverage it from Google, from Amazon, from OpenAI et cetera.
Now, at Gartner we also talk about two AI races. Right? There's the tech vendor race ‑‑ the Googles of the world ‑‑ and there is the end‑user race. Right?
The tech vendor race is an accelerated race, as we saw with all that embedded AI functionality. They're going full on, wanting to capture land and be first.
For us in our organizations, we can take our time and slow down. Right? Especially if our industry is not being disrupted by AI. Most organizations are still very much in this mode of improving productivity, so there's no sense of urgency, no "let me do this quickly". My advice: as an organisation, if you're not being disrupted, then maybe you have the luxury to start installing those practices, looking at what the vendors are providing, and deciding safely what matters to you.
Now, for them, obviously, they're going to push. I had customers who deployed Microsoft Copilot, or OpenAI on Azure, and they came back with huge bill shock to start with, because the cost of the tokens is incredible. We just need to slow down. We should not be following the vendors, because they will try to sell us as much as they can, and the business case for generative AI is still very much under development. So what you spend is not necessarily going to give you an immediate return.
So we say steady pace. For most organizations, it's a steady pace. For other organizations, it might be an accelerated pace, but then there would have to be a reason for that; yeah?
I hope that answers your question. All right. Thank you very much.
We've got two more questions. And five minutes. I'll take this one first.
>> AUDIENCE MEMBER: From your experience with different ‑‑ from your experience with different customers, as generative AI grows within enterprises, is it expected that a lot of entities will start to develop their own generative AI models to protect their data, or is it expected that the big players will dominate?
>> ALAA ZAHER: Again, brilliant question.
So Mohammed is asking whether we are seeing organizations developing their own generative AI, their own large language models ‑‑ not necessarily building the large language models themselves, but running them in‑house rather than using them directly from a provider.
For example, you can use open‑source large language models from Hugging Face, et cetera ‑‑ or Llama, for example. So we're seeing organizations leverage those open‑source models.
Why? Because they want to host them internally ‑‑ they don't want them to be on the cloud. But that then requires a lot of skill in terms of, you know, being able to run that model internally, in‑house. So there's more effort there and less maintainability: you'll have to take care of it like any open‑source piece of software. You're going to own it, so you're going to have to have the skill sets to maintain it in the future.
We're seeing that being a driver for many organizations that don't want their data to be exposed. So they get the large language model and it's hosted internally. And they need to invest in GPUs ‑‑ that's another investment.
When I started talking about cloud, I said it takes away the hassle and the investment in infrastructure; you're going to have to make that investment if you're going to host it internally. So really, I think it's a trade‑off. We're seeing some organisations, typically those that have good software engineering capability, tend to go down that path ‑‑ they want to try things out for themselves. But for many organizations that typically depend on third parties and outsourcing, it's very difficult to do that, so they go down the route of a third party.
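A minimal sketch of running an open‑source model locally with the Hugging Face transformers library ‑‑ the model name is just an example of a small openly available model, and a real in‑house deployment would add GPU sizing, access controls, and monitoring on top:

```python
# Sketch: run a small open-source language model entirely in-house with the
# Hugging Face `transformers` library, so prompts and data never leave your
# infrastructure. Model choice, hardware, and guardrails are up to you.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",   # example of a small openly available model
)

prompt = "Our returns policy allows customers to"
result = generator(prompt, max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```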
One final question for you, please?
>> AUDIENCE MEMBER: Thank you very much for the presentation. Very nice.
>> ALAA ZAHER: I'll have to come forward.
>> AUDIENCE MEMBER: My name is Martina Legal‑Malakova. I am from GAIAxApp Slovakia. And I have a question about your presentation, because on the slide with the data and analytics landscape, you put data sharing as a social initiative. Why?
>> ALAA ZAHER: Yeah. Okay. Well, thank you for that. Your question is about the slide where we had the ecosystem of data and analytics ‑‑ you said data sharing was listed as part of the social initiatives. Yes. So many organizations are looking to leverage some data assets and data components that they have for the benefit of external parties.
>> AUDIENCE MEMBER: Yes. But I ask you, for example ‑‑ for example I am focussing on the ‑‑ but the most important space ‑‑
>> ALAA ZAHER: Sorry?
>> AUDIENCE MEMBER: The most important data sharing is in, for example, the manufacturing sector, the energy sector, the circular economy ‑‑ and this is why I ask why you put it under social initiatives. It is really a business initiative.
>> ALAA ZAHER: It could be a mix of business and social. I'll give you an example. When I worked for a telecoms company, like I said, we sat on vast amounts of data, and a big part of it was about consumer behavior. Right?
We knew where everybody lived, where they go, who they call. And we created models to basically profile consumers. And that model could be interesting, you know, in the same way that the social media companies like Facebook use it for targeted advertising. In that sense, you could utilize it in a social context as well.
>> AUDIENCE MEMBER: This is clear. This is clear. My question is why you don't put data sharing under the manufacturing sector, the energy sector, and the business use cases.
>> ALAA ZAHER: I'll tell you what ‑‑ let's take this discussion offline. I'll come to you; it's going to be much easier, because our time is up. Thank you, everyone. Thank you.