According to Bloomberg News, Apple is in negotiations to integrate Google’s Gemini artificial intelligence engine into the iPhone. Sources familiar with the matter said the discussions involve licensing Gemini for features coming to the iPhone’s software this year. While Apple is building new capabilities for its iOS 18 update on its own AI models, it is exploring partnerships to power generative AI functions, such as creating images or writing essays from simple prompts, the sources said. Apple also recently held discussions with Microsoft-backed OpenAI and considered using that company’s model, the report added.
The negotiations, which have not been finalized, would significantly expand the existing relationship between the two companies, which already have an agreement making Google the default search engine in Apple’s Safari web browser. If the two sides reach a deal, Google would gain the advantage of putting its AI technology in front of billions of iPhone users. However, a deal could also draw further antitrust scrutiny to both tech giants.
As the world grapples with AI’s growing impact, many experts believe that developing and deploying advanced capabilities at scale will ultimately require significant partnerships between companies. Tech companies are increasingly integrating advanced AI into their products through strategic agreements with specialized providers such as Google and Amazon.
A deal between Apple and Google would mark a significant shift in how AI is deployed. Companies are moving away from on-device AI in favor of cloud-based generative models, which can produce text and images on the fly in response to simple prompts. Such a deal would also likely encourage other AI providers, including Microsoft and ChatGPT parent OpenAI, to seek similar arrangements with Apple to expand their customer bases and drive further innovation.
While this type of partnership has benefits, it can also raise ethical and privacy concerns. In particular, it raises copyright questions involving the AI system’s output: whether prompts likely to produce infringing outputs should be blocked from processing, how copyright law applies to AI-generated output, how liability is allocated between providers and users, what notice-and-takedown procedures should apply, and more.
This is not the first time AI developers have struggled to agree on terms governing how data is used to train AI systems. These issues are only expected to intensify as the technology becomes more widespread and companies navigate regulatory concerns over the ethical implications of AI development. As a result, the broader industry needs to work together to identify best practices and ensure that those standards are widely adopted.