Thursday, December 22, 2022

I Have The Feeling Last Week Saw A Real Step Change In The Place Of AI In Our World.

This appeared last week:

ChatGPT: Finally, an AI chatbot worth talking to

The newest research project from OpenAI is actually interesting — and already useful. Before long, AI like this will transform business communications.

Contributing Columnist, Computerworld | 8 December 2022 22:00 AEDT

AI chatbot experts are all talking about — and talking to — a newish research project from artificial intelligence research organization OpenAI. It’s called ChatGPT.

(OpenAI, a San Francisco-based non-profit AI research lab founded in December 2015, is the same organization behind the DALL-E image generation technology.)

Conceptually, ChatGPT can be used like the AI art tools in the sense that minimal text input by the user produces credible synthetic media — paragraphs instead of images. In fact, it can write convincing, often compelling essays, stories and even poems. And, like the AI image creators, you can direct ChatGPT to write prose in specific styles.

I told ChatGPT to tell me about Twitter in three separate queries: one in the style of Ernest Hemingway, another in the style of Mark Twain and the third in the form of a limerick. The results were radically different and moderately good, though none of them was quite right.

ChatGPT excels at detecting context, which makes its natural language processing (NLP) uncannily good. OpenAI trained the underlying model with an approach called reinforcement learning from human feedback (RLHF) to create its dialogue language model (LM).

It’s able to understand context and give factual information from the same source: the knowledge encoded in its language model.

ChatGPT and future AI conversation engines like it can be used for education, research, and other uses. But it’s also a glimpse at the future of business communication, marketing, and media.

I wrote last month about a prediction that 90% of all online content may be synthetic media within four years. After exploring ChatGPT, I’m thinking that may be an underestimate.

Consider: A business blogger can direct ChatGPT to write a blog post on a certain subject and use DALL-E to create an illustration. All this can be done in a couple of minutes. ChatGPT is a better writer than 99% of bloggers out there, and the image is royalty free. One person using just these two tools could write 20 or 30 blog posts per hour. People who don’t even speak English as their first language could write flawless prose for public consumption.

ChatGPT is also far better than other writing tools on the market. Instead of writing a long email, you could just tell ChatGPT to do it. It can write marketing copy, reports — you name it.

This can all be done now. Imagine what additional improvements will enable.

There’s just one problem.

The trouble with chatbots

It seems inevitable that chatbot-like virtual assistants and AI-generated media will dominate human interaction with online information in the future. It’s easy to imagine augmented reality smartglasses, for example, that enable you to interact with an agent and get facts, data, advice, guidance and more in words and images.

Ask a question, get an answer. It sounds better than today’s search engine model, where you ask a question and get thousands of links to consider. But when will search engines just give us the answer?

In fact, Google has been working on developing this capability for years. Search engine expert Danny Sullivan calls it the “One True Answer” problem.

Google has been flirting with the “One True Answer” idea since the beginning, when it offered an “I’m Feeling Lucky” button, which skipped the list of result links and took you directly to the top result.

More recently, Google has offered a “featured snippet” box that sits at or near the top of search results. The “featured snippet” is an excerpt from a result intended to answer the search query.

The danger is that, despite major advances in search technology, information-retrieving AI still makes huge errors. That includes Google Search — and ChatGPT. And even when outright errors aren’t returned, results can be incomplete, arbitrary and biased. They can even include disinformation and political propaganda.

One disappointing example: Microsoft recently replaced many of its journalists with AI that selects and promotes news stories; ever since, MSN (which reaches nearly 1 billion readers per month) has been republishing fake news, disinformation, and bogus stories about mermaids, UFOs, and bigfoot. MSN’s AI can’t tell when a story is obviously false and doesn’t care when it comes from an unreliable source.

It’s tempting to say that because Microsoft invested $1 billion in OpenAI, maybe it should use ChatGPT for MSN story selection. But ChatGPT makes mistakes, too. It occasionally trots out total falsehoods, conspiracy theories, and disinformation with all the confidence and authority it gives to factually true information.

Still, it has to be said: ChatGPT is truly amazing.

Much more here:

https://www.computerworld.com/article/3682143/chatgpt-finally-an-ai-chatbot-worth-talking-to.html

This is also useful:

6:00am, Dec 12, 2022

Alan Kohler: Yes, ChatGPT has changed the world

I’ve been playing around with ChatGPT for a few days.

It’s the new artificial intelligence product, released 10 days ago by OpenAI, that answers questions and has taken the tech world by storm (you can find it here and it’s free to use, at least for now).

My interest was piqued by this tweet from a senior research engineer at Microsoft, Shital Shah: “ChatGPT was dropped on us a bit over 24 hours ago. It’s like you wake up to the news of the first nuclear explosion and you don’t know what to think about it but you know the world will never be the same again.”

Someone tweeted “Google is dead #ChatGPT”, and someone else wrote: “ChatGPT writes and thinks much better than the average college student IMO — it def undermines the purpose of the assignment.”

On Monday, Sam Altman, the CEO and co-founder (with Elon Musk) of OpenAI tweeted: “ChatGPT launched on Wednesday. Today it crossed one million users!”

OK, so something big is happening. What is it?

My first question to it was: “What is ChatGPT?”

Here’s the answer:

“ChatGPT is a large language model that has been trained to generate human-like responses to text input. It uses a technique called unsupervised learning, which involves training the model on a large amount of text data without providing it with any specific labels or categories. This allows the model to learn the patterns and structures of natural language on its own, and to generate responses that sound natural and fluent to a human reader.

“When you input text to ChatGPT, it uses its understanding of natural language to generate a response that is appropriate in the context of the conversation. It does this by considering the words and phrases that you have typed, and using its knowledge of language to generate a response that follows the rules of grammar and makes sense in the context of the conversation.

“Overall, ChatGPT is designed to provide a natural and intuitive way for users to have conversational interactions with a computer, without the need for explicit programming or instructions. It is a powerful tool for generating human-like responses to text input, and can be used in a variety of applications, such as chatbots, language translation, and more.”

Not bad.
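The “unsupervised learning” idea in ChatGPT’s self-description — learning the patterns of language from raw text, with no labels attached — can be illustrated with a toy bigram model. This is a drastic simplification of what a large language model actually does, and the function names here are my own invention for the sketch:

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """'Train' on raw text with no labels: just count, for each
    word, which words tend to follow it."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def generate(model, start, length=5):
    """Generate text by repeatedly emitting the most frequent
    continuation of the last word produced."""
    out = [start]
    for _ in range(length):
        continuations = model.get(out[-1])
        if not continuations:
            break  # dead end: no observed continuation
        out.append(continuations.most_common(1)[0][0])
    return " ".join(out)

model = train_bigram("the cat sat on the mat the cat ran")
print(generate(model, "the", 3))  # → "the cat sat on"
```

A model like ChatGPT replaces these simple word counts with a neural network over billions of parameters and a much longer context window, but the core idea — predict the next token from what came before, learned from raw text alone — is the same.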

Lots more here:

https://thenewdaily.com.au/finance/2022/12/12/alan-kohler-chatgpt-world-changed/

Given that this bot can actually use text input to create useful text output on pretty much any topic, it seems to me we have seen a step change, and we are only at the beginning of it!

Just where all this will lead is hard to know, but I suspect it will be important!

What do you think?

For one thing we will need better ethical overlays:

Australia’s Artificial Intelligence Ethics Framework: Making Australia a global leader in responsible and inclusive AI

Nyman Gibson Miralis  Dennis Miralis

Australia December 8 2022

Artificial Intelligence (AI) has become a part of our daily lives, from organising our schedules with voice assistants to recommending the movies we should watch. It can even help predict and prevent the spread of bushfires.

Given our increasing reliance on AI, it is important to ensure that AI is safe, secure and reliable. The Australian Government’s Artificial Intelligence (AI) Ethics Framework outlines eight principles that will guide businesses and governments to responsibly design, develop and implement AI, to ensure that Australia becomes a global leader in responsible and inclusive AI.

Australia’s AI Ethics Principles

Australia’s eight AI Ethics Principles will help:

  • Achieve safer, more reliable and fairer outcomes for all Australians.
  • Reduce the risk of negative impact on those affected by AI applications.
  • Encourage businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.

The voluntary principles, recommended for use throughout the AI lifecycle, are outlined below.

More here:

https://www.lexology.com/library/detail.aspx?g=a9203caa-1484-40ad-9f38-dbbf189772f5

More to come on all this I am sure!

David.

1 comment:

  1. It’s the use of the word “open” that concerns me. It suggests it’s open source, which it very much is not. A bit like countries that use “democratic” in their name.

    Still amazing what is being created.
