Humanity at the crossroads: Innovation, Ethics and the Future of AI

The issue is not about AI discovering crystals, or cells, or drugs. The issue is about who owns the breakthroughs achieved through AI

Harini Calamur | Updated: Friday, December 29, 2023, 08:26 PM IST
A visually compelling image representing the dynamic interplay between humanity and artificial intelligence | DALL·E-generated

It has been a year when artificial intelligence (AI) took centre stage in technology and policy discourse. Once the realm of science fiction books and films, AI is now very real. Conversations about the impact of AI on life, on work and on society can be heard everywhere. The way it has ingrained itself in our decision-making is a matter of both concern and interest, as we worry about the role of human beings in our planet’s future.

AI’s ascendancy has not only sparked groundbreaking innovations but also stirred complex debates around ethics, privacy, and the future of work. As AI continues to evolve at an unprecedented pace, it stands at the intersection of technological marvel and societal challenge, commanding global attention and demanding nuanced understanding and careful handling.

At the top, of course, is the issue of job losses. Last week Paytm cut 10,000 jobs. And it was not the only company. Search for the phrase “job losses due to AI” and the results make for disconcerting reading. A lot of this stems from the way AI is helping us at the workplace, and the way we use it. Since their public debut last year, generative AI systems have learned rapidly. The hallucinations, or made-up information, that AI produced earlier have now been much reduced, and it has become a handy tool at the workplace. Asana’s recent “State of AI at Work” report suggests that employees believe 29% of their work is replaceable by AI.

But while AI is a fantastic support at the workplace, it has gained what it knows from elsewhere. Ingesting and digesting millions of terabytes of articles, scripts, images, conversations, research papers and other kinds of knowledge has helped it learn and scale up to a level where it is getting more and more integrated into the way we work, and make decisions at work. But this knowledge was not created by AI. Nor was it created by the wave of big tech companies that have capitalised on it. And the people who created the knowledge are rightfully indignant.

The recent lawsuit filed by The New York Times (NYT) against OpenAI and Microsoft marks a speedbreaker in the evolving narrative of artificial intelligence, thrusting the legal, ethical, and moral quandaries of AI into the limelight. The lawsuit says the defendants, OpenAI and Microsoft, should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” The NYT, which claims that OpenAI used millions of articles from The Times without authorisation to train ChatGPT, not only questions the boundaries of copyright law but also raises profound concerns about the ethical implications of AI in content creation. The very real fear here is that a ChatGPT trained on NYT’s articles will become so good at mimicking NYT that it will itself become a trusted source of news.

But it is not just news that AI is devouring. It is also chomping on research papers. And the results are astounding. Google DeepMind’s new AI tool, GNoME, has helped discover 380,000 new stable crystals that are expected to power greener technology. AI has also revolutionised drug discovery and disease prediction, accelerating processes that once took years into mere days or even hours.

Sir Isaac Newton pointed out, “If I have seen further, it is by standing on the shoulders of giants.” All science is built on the work of those who discovered or invented something before. The issue is not about AI discovering crystals, or cells, or drugs. The issue is about who owns the breakthroughs achieved through AI. While AI can process and analyse data at an unprecedented scale, the original research — often the product of years of human effort and scientific rigour — remains in a grey area of ownership and credit. As AI continues to break new ground in fields like genomics, neuroscience, and environmental science, the debate over intellectual property rights, fair compensation, and recognition for the scientists and researchers whose work forms the backbone of these AI-driven discoveries will only intensify.

There is a growing realisation that the current patchwork of national regulations is insufficient to address the global impact of AI. This calls for international collaboration to establish guidelines that not only foster innovation and harness the potential of AI, but also protect the rights and contributions of individuals and communities worldwide. The need of the hour is a global harmonised regulatory framework that respects individual autonomy and prevents exploitation. The challenge for regulators will be to find that fine balance that promotes technological advancement and economic growth, without compromising on basic principles of ownership.

As we move into the second year of AI deployment, a fundamental question is staring us in the face. In a future increasingly shaped by artificial intelligence, what will it mean to be human in a world where our creations not only replicate but potentially surpass our own capabilities? For humankind, it is about redefining our role in the world: if AI can do everything we can at work, what should we do?

The writer works at the intersection of digital content, technology, and audiences. She is a writer, columnist, visiting faculty, and filmmaker

Published on: Saturday, December 30, 2023, 06:00 AM IST
