Friday, July 5, 2024

Some time back, Google sacked Blake Lemoine after he claimed he believed its AI chatbot was sentient – that it had feelings similar to humans. But the same narrative came back yesterday at a session by Anil Ananthaswamy.

So are we in for surprises? Are we letting loose a force that we do not fully understand, and may soon not fully control?

Let’s start with some of the fundamentals. Artificial Intelligence has two broad approaches – rule-based and learning-based.

The rule-based method, now rather derisively called Good Old-Fashioned Artificial Intelligence (GOFAI), was popular in the early days: the computer replies based on the rules and data that humans have explicitly fed into it.

For example, it was used to build models to predict a patient’s illness and possible medication. Such a system would be fed data on the types of diseases (of which ICD-11 lists around 120K), drugs, allergies, poisons, doses, effects, and side effects. This worked reasonably well when the patient’s data matched what was already coded, but it was way off when confronted with new information.
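To make this concrete, here is a minimal sketch of a rule-based (GOFAI-style) diagnostic lookup in Python. The symptoms, illnesses, and medications are invented placeholders for illustration, not medical knowledge:

```python
# A minimal rule-based (GOFAI-style) diagnosis sketch.
# The rules are illustrative placeholders, not medical advice.
RULES = {
    frozenset({"fever", "cough", "fatigue"}): ("influenza", "oseltamivir"),
    frozenset({"sneezing", "runny nose"}): ("common cold", "antihistamine"),
}

def diagnose(symptoms):
    """Return (illness, medication) if a coded rule matches the symptoms."""
    observed = frozenset(symptoms)
    for rule, outcome in RULES.items():
        if rule <= observed:  # every symptom in the rule is present
            return outcome
    # The classic GOFAI failure mode: anything outside the coded rules is unknown.
    return ("unknown", "no recommendation")

print(diagnose(["fever", "cough", "fatigue"]))  # ('influenza', 'oseltamivir')
print(diagnose(["blurred vision"]))             # ('unknown', 'no recommendation')
```

The system is only as good as its hand-written rules – exactly the brittleness described above.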

The other method – the learning-based approach, called machine learning – is where the computer learns on its own. While it is still limited by the data humans give it, it keeps learning from whatever data it has access to, by finding patterns.
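Here, by contrast, is a minimal sketch of the learning-based approach: a tiny perceptron that learns the logical AND pattern from labelled examples instead of being given explicit rules. The examples and learning rate are assumptions chosen for illustration:

```python
# A minimal learning-based sketch: a perceptron learns the AND pattern
# from examples instead of being given explicit rules.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0  # weights start at arbitrary values

for _ in range(20):  # repeated passes over the data
    for (x1, x2), target in examples:
        predicted = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - predicted
        # Nudge the weights in whatever direction reduces the error.
        w1 += 0.1 * error * x1
        w2 += 0.1 * error * x2
        bias += 0.1 * error

for (x1, x2), target in examples:
    predicted = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print((x1, x2), "->", predicted, "(expected", target, ")")
```

No rule for AND was ever written down; the weights drifted toward values that reproduce the pattern – which is all that “learning from data” means at the smallest scale.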

The second approach is, of course, better, and that is what we are now seeing all around us: the GPT-n family (Generative Pre-trained Transformer, nth generation) developed by OpenAI; ChatGPT, which is built on GPT-3.5; Microsoft’s Bing powered by OpenAI’s GPT; Google’s Bard; and many more. The question is: why is it exploding now?

Well, the answer is the nosediving cost of computing power over the years, and the availability of data with increasing digitization. Hence we now have both the inputs (data) and the means (computers) to develop intelligence that increasingly looks like that of humans.

Leaving aside the complexity of the science (which I too don’t fully understand), the capacity of these AI engines is expressed in terms of weights – the dynamic values attached to the connections between the basic units of a neural network. The weights start out as random numbers, which the engine then optimizes at warp speed to arrive at its predictions.
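Here is a minimal sketch of what “optimizing the weights” means, assuming a toy target (the model should learn y = 3x) invented purely for illustration:

```python
import random

# A minimal sketch of weight optimization (gradient descent).
# We want the model to learn y = 3 * x; the single weight w
# starts at a random value and is nudged to reduce the error.
data = [(x, 3 * x) for x in range(1, 6)]  # illustrative training pairs
w = random.uniform(-1, 1)                 # weight starts as a random number

for step in range(50):
    for x, y_true in data:
        y_pred = w * x
        error = y_pred - y_true
        w -= 0.01 * error * x  # gradient of the squared error w.r.t. w

print(f"learned weight: {w:.3f}")  # converges close to 3.0
```

Scale this up from one weight to hundreds of billions, and you have the training loop of a modern AI engine.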

GPT-3 has about 175 billion weights, and GPT-4 is speculated to have around 1 trillion (though we may never know for sure, as OpenAI – now closely tied to Microsoft – has stopped disclosing such details). Compared to that, the human brain has roughly 100 trillion synapses, its rough equivalent of weights.

So can we assume that we humans are at least 100 times more “intelligent” than the best AI, and hence safe? The answer is not so easy: the speed of development over the last few months has been astounding and seems to be accelerating, and even at their current “intelligence” the AI engines are doing some crazy things.

We know that some, like Elon Musk, have been quite vocal about the potential dangers, and there are now increasing appeals to slow down development so that, as a society, we have a better understanding of what we are getting into. As I understand it, these are some of the major concerns:

1. It is probably the most important phenomenon since nuclear energy, with the same unlimited potential to do good or bad – but with one crucial difference. When nuclear power emerged, the world witnessed its destructive power and was wise enough to bring it under control, a system that largely works to date. AI, however, is being developed by only a handful of companies, and only a handful of people working there fully understand it. Yet it has the potential to influence the whole of humanity.

2. Nuclear energy has always been with us in nature; we nurtured it and understood how to use it. AI is something we have invented, and we are possibly still in control, but we increasingly see capabilities that escape explanation. And by design it is self-learning and self-developing – with limitless possibilities.

3. Does AI have emotions? Until recently this was dismissed as a stupid question, but some of the recent interactions with Bing, for example (before it was reined in through guardrails), at least mimic emotion. Maybe our definition of emotion is simply different from AI’s, and we are just another machine – one that is way more sophisticated (for now).

At the very least, AI has the potential to connect with people emotionally, especially the vulnerable, and can take them down dangerous paths. And it can be far worse than anything social media could ever do.

4. The loss of jobs, especially digital ones (copywriting, graphics, digital painting, animation, analysis, reports, etc.), has been much talked about, and possibly alternatives will come up to absorb these people. The larger question, however, is: what happens to the traditional definition of creation and the creator?

For example, will we in the future be reading stories and novels generated by machines? Or watching movies conceptualized and created entirely by AI? Will we follow digital actors rather than real ones?

Or is it possible that people will generate their own content and consume it, as we are increasingly doing with, say, DALL·E 2? What then happens to creativity?

5. What is the impact on education? Apart from the potential to obliterate traditional education (which, unfortunately, has largely remained an exercise in cramming and scoring), AI may create a need to educate people on how to use these tools and find new ones.

Let’s face it: education (at least at the primary level) was created to cater to the labor needs of the manufacturing industry – people who would show up at fixed hours, obey the boss, and do something repetitively. It changed substantially with the services industry, and COVID did away with the need to go to a workplace at all. But AI has the potential to hyper-personalize work, and possibly eliminate the need to work for the major part of the day.

How will the education system respond to that? What changes are needed? Banning ChatGPT (which, unfortunately, some institutions are doing) is not the way forward – but what is?

6. The development of AI by a few is a concern, but the more immediate concern is the digital divide it will aggravate. Used right, AI has the potential to dramatically improve individual productivity. So what happens to those who don’t have the access, or the training, to use it?

Technology is making the world an unequal place. While all of us are consumers, only a few make it, and they now control both wealth and society. Around 700K people work in a few bleeding-edge companies like Google, Tesla, Microsoft, etc., and now, with AI, even that number is shrinking.

This will be the medium-term impact. The long-term effect will be much deeper, and more dangerous: an entire generation will suffer from a learning asymmetry based on the access they have. Over time, this divide could be far worse than any divide we have had yet, and generate unprecedented social tension.

7. AI today seems to have two hubs – Silicon Valley in the US, and (it is speculated) China. The rest of the world is pretty much out of the equation, except for its top talent being engaged, mostly as employees, at the pioneering AI companies.

This aggravates the AI ethics issues that developing countries are already grappling with. While the US is talking about how racial biases – such as lower booking confirmations for Black guests on Airbnb, or higher convictions for petty crimes – are being coded into systems, no one is looking at what all this means for issues like caste in India or tribe in Africa.

The AI world is largely being defined from the perspective of the “white” man (and possibly the “yellow” man), but what happens to the “browns” and the “blacks”? How will their voices be heard?

8. AI learns by going through whatever is provided to it. And the biggest source of digital data is the internet.

As it has no intrinsic moral or ethical values, it will learn whatever it is fed. This is a classic case of Garbage In, Garbage Out (GIGO). And the problem is obvious: while the net contains an ocean of good material, it also has content that is not acceptable to humanity. It only gets worse on the dark web, which, we understand, is called “dark” for a reason.

So as much as AI throws up knowledge, it will also throw up the poisons of misogyny, sexism, casteism, and supremacist views unless controlled. And there comes the concept of human intervention – the so-called guardrails (a crude sketch of the idea appears below).

So far so good. But what about the values of the people who set the guardrails? Even if they are well-intentioned, there is every possibility that what they find unacceptable may well be acceptable in another culture or geography. And to top it all, can human corrections keep pace with AI the way it is going?
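For concreteness, here is a minimal sketch of a guardrail as an output filter, with a made-up blocklist. Real guardrails use trained moderation models and human review, and run into exactly the value questions raised above:

```python
# A minimal sketch of a guardrail: filter model output before showing it.
# The blocklist is an illustrative placeholder; real guardrails use trained
# moderation models, and deciding what belongs on the list is itself a
# value judgment that varies across cultures.
BLOCKLIST = {"slur1", "slur2"}  # hypothetical disallowed terms

def guarded_reply(model_output: str) -> str:
    words = set(model_output.lower().split())
    if words & BLOCKLIST:
        return "[response withheld by guardrail]"
    return model_output

print(guarded_reply("a perfectly harmless answer"))
print(guarded_reply("an answer containing slur1"))
```

Every entry on that blocklist is a value judgment by whoever maintains it – which is the crux of the concern.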

9. With AI now replicating voices, looks (deepfakes), and increasingly thoughts too, establishing the digital identity of an individual, organization, or process will be a challenge.

We all accept that the digital replacing the physical has brought enormous access and empowerment to all, especially the marginalized. But if we are not even sure whether the person on the other end is real, how do we deal with that person?

One obvious solution is to authenticate digital identities for everyone. Besides not being foolproof, this generates other concerns: digital anonymity has long been a tool for those who challenge the establishment, and taking it away would be music to the ears of governments, especially ones with dictatorial tendencies.
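As an illustration of what authenticating a digital identity can mean at the simplest technical level, here is a sketch using Python’s standard hmac module: a message is accepted only if it carries a tag produced with a shared secret key. The key and messages are made-up placeholders; real identity systems use public-key certificates and are far more involved:

```python
import hmac
import hashlib

# A minimal sketch of message authentication: the receiver accepts a
# message only if its tag was produced with the shared secret key.
# The key and messages are illustrative placeholders.
SECRET_KEY = b"shared-secret-key"

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(message), tag)

msg = b"I am the real sender"
tag = sign(msg)
print(verify(msg, tag))                  # True: identity checks out
print(verify(b"I am an imposter", tag))  # False: tag does not match
```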

10. So, in summary, the future of AI is clearly uncertain and scary. But we created it, and it has the power to transform humanity. Many compare this moment to the invention of fire, the wheel, the printing press, or the steam engine. Each of those leapfrogged civilization, and AI can do that too.

So there is no question of turning back the clock. What we need is to understand its true potential, leverage it through collective leadership, and ensure the benefits reach all rather than a few.

The coming days, weeks, and months will determine whether we are getting it right.

Unlike similar situations before, we may not even have years.

About the author:

Dipankar Khasnabish is a seasoned professional with vast expertise spanning diverse industries and technologies, leveraging his extensive background in business strategy, management consulting, and finance to drive organizational growth and success. With a strong focus on digital transformation, he excels in fostering consensus, resolving conflicts, and spearheading innovative solutions for companies across various sectors.

Disclaimer: This article has been reproduced with the author’s consent. The Enterprise does not warrant, endorse, guarantee or assume responsibility for the accuracy or reliability of the information offered within the article. 
