Anthropic, the AI startup co-founded by former OpenAI executives, recently launched its newest text-generating model, Claude 2, to the public.
An evolution of the existing Claude (version 1.3), the new model performs better in important areas such as academic benchmarks and code writing, and is said to be safer, more reliably avoiding harmful content and bias.
Let’s look at what this new AI model, which seeks to give ChatGPT a run for its money, can do.
You can try Claude for free (for U.S. and U.K. residents only) here.
The newly released Claude 2 is now available in beta in the U.S. and the U.K., both on Anthropic's website for public access and through a limited-access, paid API service.
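For readers curious what that paid API looks like in practice, here is a minimal sketch of how a Claude 2 completion request could be assembled. The endpoint, header names, and field names are assumptions based on Anthropic's launch-era text-completions API (not confirmed by this article), and a real API key would be needed to actually send the request.

```python
import json

# Assumed endpoint for Anthropic's launch-era text-completions API.
API_URL = "https://api.anthropic.com/v1/complete"

def build_request(question: str, api_key: str, max_tokens: int = 300):
    """Return (headers, body) for a hypothetical Claude 2 completion call."""
    headers = {
        "x-api-key": api_key,          # placeholder; use your own key
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    # The completion-style API expects a Human/Assistant turn format.
    prompt = f"\n\nHuman: {question}\n\nAssistant:"
    body = json.dumps({
        "model": "claude-2",
        "prompt": prompt,
        "max_tokens_to_sample": max_tokens,
    })
    return headers, body

headers, body = build_request("Summarize this article in two sentences.", "sk-...")
```

The returned headers and JSON body could then be passed to any HTTP client; only the payload construction is shown here, since sending it requires valid credentials.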
The new AI chatbot is capable of searching and analyzing text to create summaries, answer questions, or produce written content on different topics. The company describes it as a “friendly, enthusiastic, honest personal assistant” tool.
Claude 2 was developed based on the progress of its predecessor, Claude 1.3, which was introduced in March 2023 but was available only to brands and businesses upon request.
This time, it has been trained on more recent data (from early 2023) sourced from licensed datasets, selected websites, and voluntarily supplied user data. A significant 90% of this data is in English.
Thanks to improved training, user feedback, and some other tweaks, the developers say the current model outperforms the previous version, although by their own metrics the gains are modest in most areas.
Claude 2's academic results are only slightly better than the earlier model's: on the multiple-choice sections of the U.S. bar and medical licensing exams, and on math-solving tasks, it scored just under 3% higher than before.
Where it is significantly enhanced is programming: Claude 2 scored just over 71% on the Codex HumanEval Python coding test, whereas Claude 1.3 had only reached 56%.
Talking about how Claude 2 surpasses the earlier version, Anthropic has said it is “twice as good” at avoiding the most common and pressing issues that all AI text generators face, from its main competitors ChatGPT, Bing, and Bard to Claude 1.3 itself. These issues include hallucination (when the software answers questions with made-up, incorrect, or out-of-context information), harmful bias stemming from the human-made content the models are trained on, and, most importantly, the generation of potentially harmful responses of various kinds: illegal, violence-inducing, hateful, mentally toxic, and so on.
However, Anthropic has so far given no clear information on how it measured this twofold improvement.
More interestingly, Anthropic does not recommend applying Claude 2 in platforms “where physical or mental health and well-being are involved” or in “high stakes situations where an incorrect answer would cause harm.”
For now, some big products are already leveraging Claude 2 through API access, such as the popular generative AI suite Jasper. Only time will tell how Anthropic and its “friendly AI chatbot” will fare in the mass market and against competing apps.
THE AUTHOR
Ivanna Attie
I am an experienced author with expertise in digital communication, stock media, design, and creative tools. I have closely followed and reported on AI developments in this field since its early days. I have gained valuable industry insight through my work with leading digital media professionals since 2014.