Siri Getting Smarter? Apple Claims Its AI Model Runs Circles Around GPT-4
An Apple research paper titled "ReALM: Reference Resolution As Language Modeling" states: "We also benchmark against GPT-3.5 and GPT-4, with our smallest model achieving performance comparable to that of GPT-4, and our larger models substantially outperforming it."
ReALM takes into account the items on a device's screen along with the context of tasks currently in progress. In essence, it allows an assistant such as Siri to understand not only what the user is doing, but what the user intends to do with a prompt. It translates the visual content on the screen into text so the model can better grasp the user's intention and the overall picture.
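To make the idea concrete, here is a minimal Python sketch of how on-screen items might be flattened into a text prompt so a language model can resolve a reference like "call that number." The ScreenEntity structure, the prompt format, and the example entities are illustrative assumptions, not Apple's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ScreenEntity:
    """A single item parsed from the current screen (assumed structure)."""
    entity_id: int
    entity_type: str   # e.g. "phone_number", "address", "business_name"
    text: str          # the visible text of the item

def screen_to_prompt(entities: list[ScreenEntity], user_request: str) -> str:
    """Serialize on-screen entities into a textual prompt for a language model.

    This mirrors the general idea described in the ReALM paper: the screen is
    rendered as text so the model can pick out the entity the user refers to.
    The exact prompt format here is an assumption for illustration only.
    """
    lines = ["Items currently on screen:"]
    for e in entities:
        lines.append(f"  [{e.entity_id}] ({e.entity_type}) {e.text}")
    lines.append(f'User request: "{user_request}"')
    lines.append("Which item id does the user mean?")
    return "\n".join(lines)

# Example: a business listing visible on screen
entities = [
    ScreenEntity(1, "business_name", "Joe's Pizza"),
    ScreenEntity(2, "address", "123 Main St"),
    ScreenEntity(3, "phone_number", "(555) 010-4242"),
]
print(screen_to_prompt(entities, "Call that number"))
# A reference-resolution model would be expected to answer with item 3.
```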
Performance, according to Apple's researchers, is on par with, and at times better than, its leading OpenAI competition. Another aspect worth noting is that this technology runs on-device. This is a very Apple-like approach that keeps its hardware and software tightly knit while delivering the best user experience. It also helps with privacy concerns, which are paramount for Apple's customers.
Apple is no stranger to trying new technologies, although it often does so after other mavericks have blazed similar trails. The Apple Vision Pro is a device that could benefit tremendously from AI, giving users an even better experience.
Apple's voice assistant, Siri, is in dire need of an upgrade. Widely regarded as trailing most of the field among voice assistants, it seems antiquated and ineffective for more complex tasks. With the infusion of AI into its DNA, Siri may now start to develop at a faster and more competent pace, ultimately resulting in a much better user experience.
More details will emerge at Apple's annual WWDC developer conference on June 10th, with the iOS 18 software release following soon after.