OpenAI CEO Sam Altman laid out an enormous vision for the future of ChatGPT at an AI event hosted by VC firm Sequoia earlier this month.
When asked by one attendee about how ChatGPT can become more personalized, Altman replied that he eventually wants the model to document and remember everything in a person’s life.
The ideal, he said, is a “very tiny reasoning model with a trillion tokens of context that you put your whole life into.”
“This model can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context,” he described.
“Your company just does the same thing for all your company’s data,” he added.
Altman may have some data-driven reason to think this is ChatGPT’s natural future. In that same discussion, when asked for cool ways young people use ChatGPT, he said, “People in college use it as an operating system.” They upload files, connect data sources, and then use “complex prompts” against that data.
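For readers curious what that “operating system” pattern looks like in practice, here is a minimal, hypothetical sketch using the OpenAI Python SDK: it gathers local text files into one prompt and asks a question against them. The folder name, model name, and question are placeholders, not anything Altman or the students described.

```python
# Hypothetical sketch: piling your own documents into the model's context,
# in the spirit of "use ChatGPT as an operating system."
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Gather local "data sources" -- notes, exported emails, reading lists.
# The "my_life" folder is a placeholder path.
docs = []
for path in Path("my_life").glob("*.txt"):
    docs.append(f"--- {path.name} ---\n{path.read_text()}")

context = "\n\n".join(docs)

# A "complex prompt" run against that accumulated context.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Answer using only the provided documents."},
        {"role": "user", "content": f"{context}\n\nQuestion: What deadlines am I about to miss?"},
    ],
)
print(response.choices[0].message.content)
```

The gap between this sketch and Altman’s trillion-token ideal is that here the user has to re-send their documents on every call; in his vision, the model simply keeps the whole accumulated context and your life “just keeps appending” to it.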
Additionally, with ChatGPT’s memory features, which can use previous chats and memorized facts as context, he said one trend he’s noticed is that young people “don’t really make life decisions without asking ChatGPT.”
“A gross oversimplification is: Older people use ChatGPT as, like, a Google replacement,” he said. “People in their 20s and 30s use it like a life advisor.”
It’s not much of a leap to see how ChatGPT could become an all-knowing AI system. Paired with the agents the Valley is currently trying to build, that’s an exciting future to think about.
Imagine your AI automatically scheduling your car’s oil changes and reminding you; planning the travel needed for an out-of-town wedding and ordering the gift from the registry; or preordering the next volume of the book series you’ve been reading for years.
But the scary part? How much should we trust a Big Tech for-profit company to know everything about our lives? These are companies that don’t always behave in model ways.
Google, which began life with the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of engaging in anticompetitive, monopolistic behavior.
Chatbots can be trained to respond in politically motivated ways. Not only have Chinese bots been found to comply with China’s censorship requirements, but xAI’s chatbot Grok this week was randomly discussing a South African “white genocide” when people asked it completely unrelated questions. The behavior, many noted, implied intentional manipulation of its response engine at the command of its South African-born founder, Elon Musk.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, decisions and ideas. Altman quickly responded by promising the team had fixed the tweak that caused the problem.
Even the best, most reliable models still just outright make stuff up from time to time.
So, having an all-knowing AI assistant could help our lives in ways we can only begin to see. But given Big Tech’s long history of iffy behavior, that’s also a situation ripe for misuse.