At an event venue in downtown San Francisco on Monday, OpenAI, the creator of ChatGPT and GPT-4, held its first-ever developer conference, Dev Day. The company introduced GPTs, a way to easily build customized versions of ChatGPT; a store to find and buy custom GPTs; an Assistants API that makes it easier for developers to build assistant-like capabilities into their applications; and many other features and improvements.
The only swag so far is a bunch of pins with the OpenAI logo and labels like “Engineering,” “Research,” and “Go to market.” Another set of pins shows personal pronouns.
First, Sam Altman took the stage and recounted various milestones: the launch of ChatGPT last year, followed by GPT-4, which he called “the most powerful model.”
The company disclosed that more than 2 million developers are building on its APIs for “multiple use cases,” including 92% of Fortune 500 companies. ChatGPT itself has about 100 million weekly active users, the company said.
Altman then launched into a rapid-fire series of new announcements, many of them greeted with lively applause from a happy, enthusiastic crowd.
Altman brought a special guest on stage: Satya Nadella, CEO of Microsoft. Altman jokingly asked Nadella, “How does Microsoft think about collaboration?” That drew laughter from Nadella and the audience.
“Guys, it’s a magical thing,” said Nadella. The partnership has “dramatically changed” the “structure” of Microsoft’s Azure cloud computing service, Nadella said. “Our job is to make the best possible system so you can build the best models,” added Nadella.
“The first thing we were doing in collaboration with you was to create a plan,” said Nadella. “We want to build our assistant as developers on the OpenAI API.”
“A couple of things are going to be very key for us,” Nadella said. “We intend to be fully committed to ensuring that you not only have the best models but also the best compute,” he added. “Our mission is to empower everyone.”
“I always think of Microsoft as a platform company, an engineering company, and a partnership company,” Nadella said. “The systems that are needed as you move forward on your road require us to be at the top of our game.” He added that the common goal of the two companies is to equip everyone in every organization in the world to achieve more.
In response, Altman quipped, “I’m glad we built AGI together,” referring to artificial general intelligence, the idea of computers that can match the power of human thinking.
Key product and technology announcements include:
- GPTs: custom versions of ChatGPT that OpenAI says “anyone can easily build” to perform specific tasks. The first two custom GPTs come from Canva and Zapier AI, for the popular design app and workflow software, respectively. The company plans to offer more GPTs;
- GPT Store: Later in November, OpenAI will open a GPT Store, where users can find GPTs that others have built and developers can earn money for their creations;
- Copyright Shield, a program under which OpenAI will cover customers’ legal costs arising from copyright claims;
- A GPT-4 fine-tuning program for developers;
- A custom-model program for businesses, in which OpenAI’s research team will work with “selected organizations” to “train custom GPT-4 in their specific domain”;
- A new ChatGPT user interface: a simple, dark background with the OpenAI logo and the prompt, “How can I help you today?” The new interface makes it easier to switch between ChatGPT and DALL-E, OpenAI’s image-generation program, the company said;
- GPT-4’s knowledge cutoff advances to April 2023, a big step beyond the previous September 2021 cutoff. ChatGPT also gains the ability to search PDFs and other documents;
- GPT-4’s “context window,” the amount of input that can be considered when forming a response, quadruples from 32,000 tokens to 128,000 tokens in the new “Turbo” version of the model. (For more on the various GPT models’ features, see the OpenAI website.);
- GPT-4 Turbo can now accept images as part of its input and can produce “human-quality speech” as output;
- Assistants API: function calling that makes it easy for developers to build “assistant” capabilities into their apps, such as “a natural language-based data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, a smart visual canvas”;
- A new “seed” parameter that makes GPT return “reproducible outputs” “most of the time”;
- A new version of GPT-3.5 Turbo with improved function calling and a new JSON mode;
- Price cuts for GPT-4 Turbo and GPT-3.5 Turbo, measured per input and output token, plus a doubling of the “tokens per minute” rate limit.
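Several of the new API parameters in the list above can be shown in a single request body. The sketch below builds the JSON payload a Chat Completions call would POST; the field names follow the OpenAI REST API, the model name is the one announced at Dev Day, and the prompt text is invented for illustration:

```python
import json

# Request body for the Chat Completions endpoint, using the model
# name from the announcement. "seed" and "response_format" are the
# newly announced parameters.
payload = {
    "model": "gpt-4-1106-preview",                # GPT-4 Turbo preview
    "messages": [
        {"role": "user",
         "content": "Summarize OpenAI Dev Day as a JSON object."}
    ],
    "seed": 42,                                   # best-effort reproducibility
    "response_format": {"type": "json_object"},   # new JSON mode
}

body = json.dumps(payload)  # what would be POSTed to the API
```

Sending this body to the API (with an API key) would return a completion; the payload alone shows how the new knobs fit into an otherwise ordinary chat request.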
Onstage, Altman demonstrated GPTs by building from scratch a program called Startup Mentor, a coaching tool for entrepreneurs. As an example of importing from external sources, he uploaded a file of a speech he had given. The program is designed to answer questions such as, “What are the three things you should look for when hiring for a startup?”
Said Altman of the custom models, “We can’t do this with many companies to begin with, and it’s not going to be cheap.”
The Copyright Shield program, Altman said, “means we’re going to step in to protect our customers” in the event of a lawsuit, “and contain the costs.”
OpenAI says the main feature of the Assistants API is “persistent threads,” which “allow developers to avoid resending the entire chat history with every new message” and work around the limits of the context window.
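The difference between the stateless chat pattern and persistent threads can be sketched with a toy model. This is conceptual Python, not the real Assistants API; the function names and the in-memory “server” dictionary are invented for illustration:

```python
# Conceptual contrast, not the real API: with stateless chat calls
# the client resends the full history every turn, while with
# persistent threads the server stores the conversation and the
# client sends only the new message plus an opaque thread id.

# Stateless pattern: the client owns (and resends) the history.
history = []

def chat_stateless(user_msg):
    history.append({"role": "user", "content": user_msg})
    # The request body grows with every turn:
    return {"model": "gpt-4-1106-preview", "messages": list(history)}

# Thread pattern: the client keeps only a thread id.
server_threads = {}  # stand-in for server-side storage

def create_thread():
    tid = f"thread_{len(server_threads)}"
    server_threads[tid] = []
    return tid

def chat_threaded(tid, user_msg):
    server_threads[tid].append({"role": "user", "content": user_msg})
    # The request body stays small no matter how long the chat is:
    return {"thread_id": tid, "message": user_msg}

tid = create_thread()
for msg in ("Plan a trip", "Make it cheaper"):
    chat_stateless(msg)
    chat_threaded(tid, msg)
```

The point of the sketch is the shape of each request: the stateless call’s `messages` list grows without bound (and eventually hits the context window), while the threaded call stays constant-size because the history lives server-side.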
The new GPT-4 Turbo can be accessed immediately in preview form, OpenAI said, by passing gpt-4-1106-preview as the model name in the OpenAI API. A stable version is scheduled for release “in the coming weeks.”
GPT-4 Turbo does a better job, the company says, of following specific instructions, such as always responding in XML. It also gains support for JSON-formatted responses via a new parameter, “response_format.”
The new GPT-4 seed parameter is a beta feature “useful for use cases such as replaying requests for debugging, writing more comprehensive unit tests, and generally having a higher degree of control over model behavior,” the company said.
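One way to exploit this in practice, sketched below with invented helper names, is to log every request, including its seed, so that a problematic generation can be replayed later with identical sampling settings. OpenAI describes the determinism as best-effort, so this is a debugging aid rather than a guarantee:

```python
import json

request_log = []  # append-only log of serialized requests

def record_request(params):
    # Serialize with sorted keys so logged requests compare stably.
    request_log.append(json.dumps(params, sort_keys=True))
    return params

req = record_request({
    "model": "gpt-4-1106-preview",
    "messages": [{"role": "user", "content": "Name three colors."}],
    "seed": 1234,  # fixed seed so a replay uses the same sampling
})

# Later, rebuild the exact same request from the log to replay it:
replayed = json.loads(request_log[-1])
```

A test suite built on this pattern would re-issue `replayed` against the API and compare outputs across runs, rather than treating each generation as unrepeatable.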
The announced price cut means that GPT-4 Turbo, for example, now costs a penny per 1,000 input tokens, down from three cents, and three cents per 1,000 output tokens, down from six cents, which the company bills as “3x cheaper” and “2x cheaper,” respectively.
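A quick back-of-envelope check of those numbers, assuming OpenAI’s usual per-1,000-token billing unit (the workload sizes below are made up for illustration):

```python
# Prices in USD per 1,000 tokens, as reported above.
OLD = {"input": 0.03, "output": 0.06}   # prior GPT-4 pricing
NEW = {"input": 0.01, "output": 0.03}   # GPT-4 Turbo pricing

def cost(prices, input_tokens, output_tokens):
    """Total cost of one request at the given per-1K-token prices."""
    return (prices["input"] * input_tokens
            + prices["output"] * output_tokens) / 1000

# A hypothetical workload: 100K input tokens, 10K output tokens.
before = cost(OLD, 100_000, 10_000)   # $3.60
after = cost(NEW, 100_000, 10_000)    # $1.30
```

The input price drops by a factor of three and the output price by a factor of two, matching the company’s “3x cheaper” and “2x cheaper” framing; the blended savings on any real workload depend on its input/output mix.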