Apple has published a technical paper detailing the models it developed to power Apple Intelligence, the range of generative AI features coming to iOS, macOS and iPadOS in the coming months.
In the paper, Apple pushes back against accusations that it took an ethically questionable approach to training some of its models, reiterating that it did not use private user data and relied on a combination of publicly available and licensed data for Apple Intelligence.
“[The] pre-training datasets consist of data licensed from publishers, curated public or open-source datasets, and publicly available information crawled by our web crawler, Applebot,” Apple wrote in the paper. “Given our commitment to protecting user privacy, we note that the data mix contains no private data from Apple users.”
In July, Proof News reported that Apple used a dataset called The Pile, which contains subtitles from hundreds of thousands of YouTube videos, to train a family of models designed for on-device processing. Many YouTube creators whose subtitles were swept up in The Pile were unaware of this and had not consented to it. Apple later issued a statement saying it does not intend to use those models to power any AI features in its products.
The technical paper covers the models Apple unveiled at its Worldwide Developers Conference (WWDC) 2024 in June, called Apple Foundation Models (AFM), and emphasizes that the training data for the AFM models was obtained in a “responsible” manner, at least by Apple's definition of responsible.
The AFM models' training data includes publicly available web data as well as data licensed from undisclosed publishers. According to The New York Times, Apple reached out to several publishers toward the end of 2023, including NBC, Condé Nast and IAC, about multi-year deals worth at least $50 million to train models on the publishers' news archives. Apple's AFM models were also trained on open source code hosted on GitHub, specifically Swift, Python, C, Objective-C, C++, JavaScript, Java and Go code.
Training models on code without permission, even open code, is a point of contention among developers. Some developers argue that certain open-source codebases are unlicensed or do not allow AI training in their terms of use. But Apple said it had “license filtered” the code to include only repositories with minimal usage restrictions, such as those under MIT, ISC or Apache licenses.
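The paper doesn't describe how this filtering works in practice, but the idea is straightforward: keep only repositories whose detected license appears on a permissive allow-list. A minimal sketch of that idea, where the repository metadata shape and the allow-list contents are illustrative assumptions rather than Apple's actual pipeline:

```python
# Hypothetical "license filtering" sketch: keep only repositories whose
# license carries minimal usage restrictions. SPDX identifiers are used
# for the allow-list; the repo dicts are made-up examples.
PERMISSIVE_LICENSES = {"MIT", "ISC", "Apache-2.0"}

def license_filter(repos):
    """Return only the repos whose license is on the permissive allow-list."""
    return [r for r in repos if r.get("license") in PERMISSIVE_LICENSES]

repos = [
    {"name": "widget-kit", "license": "MIT"},
    {"name": "gpl-tool", "license": "GPL-3.0-only"},
    {"name": "no-license", "license": None},  # unlicensed repos are dropped too
]
print(license_filter(repos))  # only widget-kit survives
```

Note that a real pipeline would also need reliable license *detection*, since many repositories declare their license only in a free-form LICENSE file, which is where much of the developer contention lies.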
To improve its AFM models' mathematical skills, Apple specifically included math problems and answers from web pages, math forums, blogs, tutorials and seminars in the training set, according to the paper. The company also leveraged “high-quality, publicly available” datasets (not named in the paper) with “licenses that permit use for training … models,” filtered to remove sensitive information.
In total, the training data set for the AFM models comprises about 6.3 trillion tokens. (Tokens are bite-sized chunks of data that are generally easier for generative AI models to ingest.) For comparison, that's less than half the number of tokens, 15 trillion, that Meta used to train its flagship text-generating model, Llama 3.1 405B.
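The "less than half" comparison follows directly from the two figures reported above:

```python
# Token counts as reported: Apple's AFM training set vs. Meta's Llama 3.1 405B.
afm_tokens = 6.3e12    # ~6.3 trillion tokens
llama_tokens = 15e12   # 15 trillion tokens

ratio = afm_tokens / llama_tokens
print(f"AFM's training set is {ratio:.0%} the size of Llama 3.1 405B's")  # 42%
```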
Using additional data, including human feedback and synthetic data, Apple fine-tuned the AFM models to mitigate undesirable behaviors, like spouting toxicity.
“Our models are designed to help users carry out everyday activities with their Apple products, grounded in Apple's core values, and rooted in our responsible AI principles at every stage,” the company said.
The paper contains no smoking gun or shocking insights, and that's by careful design. Papers like this rarely reveal much, not only because of competitive pressures but also because disclosing too much could land a company in legal trouble.
Some companies that train models by scraping public web data argue that the practice is protected by the fair use doctrine. But that claim is hotly contested, and the number of lawsuits over it is growing.
In its paper, Apple notes that it lets webmasters block its crawler from harvesting their data. But that leaves individual creators in a tricky position: what can an artist do, for example, if their portfolio is hosted on a site that refuses to block Apple's data scraping?
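The opt-out works through the standard robots.txt mechanism. Apple documents a dedicated user agent, Applebot-Extended, that webmasters can disallow to keep their content out of AI training while still letting regular Applebot crawling power features like Siri and Spotlight. A minimal sketch, assuming the site owner wants to block training use across the whole site:

```
# robots.txt — opt this site's content out of Apple's AI training
# while still allowing regular Applebot crawling (Siri, Spotlight).
User-agent: Applebot-Extended
Disallow: /
```

This is exactly the mechanism that creates the dilemma above: only the person who controls robots.txt on the hosting site can add these lines, not the artist whose work happens to live there.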
Legal battles will ultimately decide the fate of generative AI models and how they're trained. For now, though, Apple is trying to position itself as an ethical player while avoiding unwanted legal scrutiny.