Stability AI has open sourced StableLM, a ChatGPT-like model that supports both text-based question answering and code generation.

On April 19th, Stability AI announced on its official blog the release of a new large language model called StableLM. It is a ChatGPT-like model that supports text question answering, creative writing, code generation, and other tasks.

It is reported that StableLM is currently available in two versions, with 3 billion and 7 billion parameters, and that versions with 15 billion, 65 billion, and 175 billion parameters will follow. The model may be used commercially, subject to the terms of the CC BY-SA 4.0 license.

It supports Chinese, though output may deviate, as the model is still being optimized. StableLM surpassed 3,000 stars on GitHub within just 10 hours of its release. Its strong performance and low resource usage make it well suited to small and medium-sized enterprises and individual developers; it can even run on an ordinary laptop.

The open-source repository is at https://github.com/stability-AI/stableLM/ and an online demo is available at https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat.
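For readers who want to try the model locally rather than through the demo, the fine-tuned chat variants expect prompts wrapped in special system/user/assistant tokens. The sketch below only builds that prompt string; the exact system-prompt wording is an illustrative assumption, so check the model card in the repository for the canonical text.

```python
# Hedged sketch: assembling a chat prompt in the StableLM-Tuned-Alpha style
# (<|SYSTEM|>, <|USER|>, <|ASSISTANT|> tokens). The system text below is an
# assumption for illustration, not the official wording.

SYSTEM_PROMPT = (
    "<|SYSTEM|># StableLM Tuned (Alpha version)\n"
    "- StableLM is a helpful and harmless open-source AI language model.\n"
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the tuned model's expected chat format."""
    return f"{SYSTEM_PROMPT}<|USER|>{user_message}<|ASSISTANT|>"

prompt = build_prompt("Write a haiku about open-source AI.")
print(prompt.startswith("<|SYSTEM|>"))  # True
```

The resulting string would then be tokenized and passed to the model (for example via the Hugging Face `transformers` library), with generation stopped at the model's end-of-turn tokens.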


Stability AI is the developer of the globally renowned text-to-image platform Stable Diffusion, whose main competitor is Midjourney. It was also one of the earliest platforms to open source diffusion models. By releasing StableLM, it has now entered the field of large language models, laying the groundwork for Stability AI to launch multi-modal models similar to GPT-4 in the future.

Highlights of the StableLM technology

Support for Chinese: "AIGC Open Community" has tested StableLM in the online demo for Chinese question answering, text creation, and story writing. However, the output sometimes deviates and comprehension is limited.

Support for commercialization: StableLM is built on EleutherAI's open-source models, including GPT-J, GPT-NeoX, and the Pythia suite.

EleutherAI's models can be used for commercial purposes; for example, Dolly 2.0, which also permits commercial use, is built on the same family of models. By contrast, models derived from LLaMA share one clear, fatal flaw: they cannot be commercialized.

Small parameters, large training corpus: StableLM has only 3 billion and 7 billion parameters, but its training data reaches 1.5 trillion tokens, so its answers cover a wide range of topics. The StableLM-Alpha models are trained on a new dataset built on The Pile, which includes Wikipedia, GitHub, web crawls, physics, mathematics, medicine, and chat records.

Fine-tuning: To improve the model's accuracy, StableLM was fine-tuned on core training data from Alpaca, GPT4All, RyokoAI, and Dolly, reducing issues such as poor comprehension, nonsensical answers, bias, and ethical problems.

Low resource consumption: A large language model's resource consumption scales with its parameter count; for example, GPT-3's 175 billion parameters require a massive computing cluster. StableLM's small parameter counts have been extensively optimized, making it usable on an everyday laptop, while a more powerful cloud server can further improve the experience.

Stable iteration: Stability AI is one of EleutherAI's main sponsors and has strong capabilities in technology development and feature iteration, which safeguards ongoing product innovation.

Strong compatibility: StableLM does not require specific devices or chips to run, so it works on both everyday laptops and private cloud servers.

A responsible and easy-to-use open-source model:

Stability AI treats data security, user privacy, and ethics as top priorities in the research and development of large language models.

By open sourcing StableLM, Stability AI hopes to involve more researchers, businesses, and individual developers in large language model research, working together to eliminate potential risks and to make StableLM one of the most user-friendly products in the open-source large language model field.

About Stability AI:

Stability AI open sourced Stable Diffusion in August 2022 and released version 2.0 in November of the same year. Four of the top ten image apps in the Apple App Store are powered by Stable Diffusion.

Stability AI currently has an AI-generated-content community of more than 140,000 developers and seven research centers, placing it in a leading position in functional innovation, pre-training, algorithm optimization, and data fine-tuning.
