A new artificial intelligence chatbot could make it much easier for students to cheat on tests and homework that require written answers.
Billed by technologists and industry watchers as the most powerful AI chatbot ever released, ChatGPT is the latest effort from OpenAI, a San Francisco-based company that also made tools like DALL-E 2, the image generator that made a splash earlier this year.
ChatGPT, which has been trained on a gigantic sample of text from the internet, can understand human language, conduct conversations with humans and generate detailed text that many have said is human-like and quite impressive.
‘We’ve trained a model called ChatGPT which interacts in a conversational way,’ OpenAI said in a statement. ‘The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.’
Although ChatGPT has been released to the public for anyone to use, for free, the AI has been so popular that OpenAI had to temporarily shut down the demo link today. More than a million people signed up in the first five days it was released.
This type of AI could be misused in countless ways, from furthering misinformation and hateful content to stealing the copyrighted work of published authors and upending the entire education system.
Kevin Bryan, an associate professor of strategic management at the University of Toronto who ran an AI-based entrepreneurship program and follows the industry closely, said he was ‘shocked’ by the capabilities of ChatGPT after he tested it by having the AI write numerous exam answers.
‘You can no longer give take-home exams/homework,’ he said at the start of a thread detailing the AI’s abilities.
He asked the AI ‘whether a new cash-constrained auto startup will have trouble motivating suppliers with relational contracts, what they can do instead, and what it means for the boundaries of the firm.’
The results were deemed worthy of an A.
It’s worth noting that ChatGPT does not trawl the internet for answers in the way Google Search does, and its knowledge is restricted to information it learned before 2021. It is also prone to giving simplistic, middle-of-the-road responses.
OpenAI has programmed the bot to refuse ‘inappropriate requests’ – which includes requests for generating instructions for illegal activities, such as how to make a bomb.
In assigning the AI various tasks, some of which involved combining knowledge across different areas, Bryan said it performed ‘frankly better than an average MBA.’
Bryan also clarified for readers of his thread who are outside his specialty area: ‘None of the answers are “wrong”, and many are fairly sophisticated in their reasoning about some of the most conceptually difficult content you would see in an intro strategy class. It is not just meaningless words!’
ChatGPT even composed a tweet for Bryan, included in the thread, which Bryan noted was another sign of the technology’s amazing advancement.
‘What’s even wilder is that the rate of improvement is increasing,’ Bryan said on Twitter. ‘Right now, it doesn’t actively search the internet nor incorporate a proper mathematical engine, but both will absolutely be part of these models next year. 100% sure these models will be part of our workflow…’
However, not everyone is ready to hold a funeral for student essays.
In Plagiarism Today, Jonathan Bailey stated that the college essay – which has been declining in popularity for years – is in fact not dead.
‘Despite the challenges, there are still times when an essay is an appropriate assessment tool. Even if it ceases being the default or the gold standard, the essay will likely remain as a tool instructors use to assess students’ grasp of the material,’ Bailey wrote.
‘AI won’t be the death of the essay, but it may change it. It may change the prompts that are used, the deliverables that need to be graded, and the general approach to the concept.’
What is OpenAI’s chatbot ChatGPT and what is it used for?
OpenAI states that their ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.
Initial development involved human AI trainers providing the model with conversations in which they played both sides – the user and an AI assistant. The version of the bot available for public testing attempts to understand questions posed by users and responds with in-depth answers resembling human-written text in a conversational format.
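The core of the preference step in RLHF can be sketched in a few lines. This is a toy illustration only, not OpenAI’s actual training code (which is not public): it assumes a reward model that assigns each candidate reply a score, and uses the standard Bradley–Terry-style loss that falls as the human-preferred reply pulls ahead of the rejected one.

```python
import math

def preference_loss(score_preferred, score_rejected):
    # Bradley-Terry style pairwise loss used in reward modelling:
    # -log(sigmoid(r_preferred - r_rejected)). Minimising it pushes
    # the reward model to score the human-preferred reply higher.
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the gap between the two replies' scores grows.
loss_close = preference_loss(0.1, 0.0)  # scores nearly tied: high loss
loss_clear = preference_loss(2.0, 0.0)  # clear preference: low loss
```

In practice the scores come from a neural reward model, and that model is then used to fine-tune the chatbot itself with reinforcement learning; the function names here are hypothetical.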
A tool like ChatGPT could be used in real-world applications such as digital marketing, online content creation, answering customer service queries or as some users have found, even to help debug code.
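The debugging use case typically looks like this: a user pastes a short snippet with a subtle bug and asks the chatbot what is wrong. The example below is hypothetical (both function names are invented for illustration), showing an off-by-one error of the kind users have reported the bot spotting, alongside the corrected version.

```python
def total(prices):
    # Buggy version a user might paste in: the loop stops one short,
    # so the final price in the list is never added.
    subtotal = 0
    for i in range(len(prices) - 1):
        subtotal += prices[i]
    return subtotal

def total_fixed(prices):
    # Corrected version of the kind the chatbot might suggest:
    # sum over the whole list.
    return sum(prices)
```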
The bot can respond to a large range of questions while imitating human speaking styles.
As with many AI-driven innovations, ChatGPT does not come without misgivings. OpenAI has acknowledged the tool’s tendency to respond with ‘plausible-sounding but incorrect or nonsensical answers’, an issue it considers challenging to fix.
AI technology can also perpetuate societal biases like those around race, gender and culture. Tech giants including Alphabet Inc’s Google and Amazon.com have previously acknowledged that some of their projects that experimented with AI were ‘ethically dicey’ and had limitations. At several companies, humans had to step in and fix problems the AI created.
Despite these concerns, AI research remains attractive. Venture capital investment in AI development and operations companies rose last year to nearly $13 billion, and $6 billion had poured in through October this year, according to data from PitchBook, a Seattle company tracking financings.