CHATBOT AI THAT DAZZLES
Chatbot AI seems to be the name of the game right now. With over 100 million users, OpenAI’s chatbot, ChatGPT, has set the record for the fastest-growing consumer application in history. Microsoft has invested $10 billion in the technology and plans to integrate it into its ecosystem of products, including Bing and Office 365. Hot on the heels of the Windows vendor, Google launched Bard, a chatbot based on its Language Model for Dialogue Applications (LaMDA).
HOW TO LOSE $100 BILLION
Despite the hype, both ChatGPT and Bard have shortcomings for users. A UK university student aiming for a first on an essay used ChatGPT to do the work. The result was a disappointing 2:2 grade. OpenAI has also admitted that its brainchild can produce biased answers and mix fact with fiction. Alphabet, Google’s parent company, lost $100 billion in market value on 8 February after Bard shared inaccurate information in a promo video. So these chatbots clearly aren’t as dazzling as they may seem. But mistakes aren’t the biggest problem. With ChatGPT and Bard, Microsoft and Google risk legal liability for every bit of misinformation and libel that their inventions create.
LEGALLY UNLAWFUL
Many are asking the question – is it too early to talk about AI and the law? And the answer from just as many is – it’s too late! Without getting into the nitty-gritty of the machine-learning algorithms underlying ChatGPT, realise that all the content these generative AI systems use was created by humans and scraped from the web. Almost in the blink of an eye, the chatbot accesses 300 billion words systematically harvested from the internet, including books, articles, websites and posts. ChatGPT also homes in on your personal information, obtained without consent. This could be a blog post, a product review or a comment on an online article. Were we asked by OpenAI if it could use our data? The straight-up answer is no. Some say this is a blatant violation of privacy, especially for sensitive data that can identify us, our family members or our location.
STOLEN DATA?
Looking at ChatGPT from a different legal perspective, OpenAI did not pay for the data it scraped from the internet. Nobody – not the individuals, website owners, companies or government agencies that produced the content – has been compensated. Does that seem fair considering OpenAI’s $29 billion valuation?
ON THIN ICE
Imagine that ChatGPT has created a piece of content that you love. So much so that you want to copyright it. You can forget it. At the moment, for a work to enjoy copyright protection in the United States or the UK, it must be created by a human. Even if the work could be copyrighted, what happens when ChatGPT generates duplicate or similar content for another user? Can the initial user sue others for copyright infringement? Margaret Esquenet, a partner at IP law firm Finnegan, has this to say:
“Even assuming that the rightful copyright owner is the person whose queries generated the AI work, the concept of independent creation may preclude two parties whose queries generated the same work from being able to enforce rights against each other. Specifically, a successful copyright infringement claim requires proof of copying—independent creation is a complete defence. Under this hypothetical, neither party copied the other’s work, so no infringement claim is likely to succeed.”
WHO IS TO BLAME?
Given the nature of the beast, ChatGPT is capable of generating biased, harmful and damaging content. Who is to blame or legally responsible? IP lawyer Michael Kelber poses this question:
“An exact copy of protected work could create potential liability, which raises another question: who is liable, the creator of the AI — such as ChatGPT — or the user who posed the query?”
CHATBOT AI – A DANGEROUS BLACK BOX
ChatGPT, Bard and other forms of generative chatbot AI are fraught with concerns around data privacy, copyright and IP, compliance and even plagiarism. This situation is exacerbated by the fact that these chatbots operate on an input-output model. A user inputs a query and ChatGPT spews out a response. Even the experts at Microsoft and Google are unable to predict the output. Put simply, nobody knows how these chatbots construct their outputs, since neural networks are the quintessential black box. Take this extreme example – a user asks ChatGPT for a remedy to an illness. The chatbot provides inaccurate information and the user gets sick. Again, who is to blame? An army of lawyers will provide the answer in seconds. Being able to identify a problem only after the fact won’t be the best defence for Big Tech planning to implement public-facing generative AI.
AI YOU CAN TRUST
While we aren’t against using ChatGPT or the like, we are pointing out the immediate limitations of chatbot AI. But we have AI you can trust. As London’s #1 end-to-end business cybersecurity and IT support provider for SMEs, Zhero has been crushing IT chaos for more than 20 years. Our AI offering, Process IT Better, will accelerate your business by building and implementing the apps you need. With the ability to extend or customise the apps you already use, Process IT Better also enhances productivity and innovation. Contact Zhero today and see how we deliver better IT faster.