Appearing at the NVIDIA conference: how did NEAR suddenly become a leading AI public chain?
Original Author: Haotian (X: @tmel0211)
Recently, the news that NEAR founder @ilblackdragon will appear at the NVIDIA AI conference has earned the NEAR public chain plenty of attention, and its price action has been gratifying too. Many friends are puzzled: isn't NEAR all-in on chain abstraction? How did it suddenly become a leading AI public chain? Below I share my observations, and along the way explain some basics of AI model training:
NEAR founder Illia Polosukhin has a long background in AI and is a co-author of the Transformer architecture. The Transformer is the foundational architecture behind today's large language models (LLMs) such as ChatGPT, which is enough to show that, before founding NEAR, its boss genuinely had experience building and leading AI large-model systems.
NEAR launched NEAR Tasks at NEARCON 2023 with the goal of training and improving AI models. After a task is completed, the platform rewards the user with NEAR tokens, and the manually annotated data is used to train the corresponding AI model.
For example, if an AI model needs to improve its ability to recognize objects in images, a vendor can upload a large batch of raw images containing different objects to the Tasks platform. Users then manually mark the positions of the objects in each image, generating a large volume of "image - object location" data that the AI can learn from to improve its image-recognition capability.
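To make that data flow concrete, here is a minimal Python sketch of what a crowdsourced annotation task and its completed labels might look like. The `AnnotationTask` and `BoundingBox` names, fields, and reward field are hypothetical illustrations, not NEAR Tasks' actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    """Pixel coordinates of one labeled object in an image."""
    label: str      # e.g. "cat", "car"
    x: int          # top-left corner
    y: int
    width: int
    height: int

@dataclass
class AnnotationTask:
    """One unit of work a vendor posts: label the objects in an image."""
    task_id: str
    image_url: str
    reward_near: float                            # hypothetical payout in NEAR tokens
    boxes: list[BoundingBox] = field(default_factory=list)

    def submit(self, boxes: list[BoundingBox]) -> None:
        """Record the annotator's labels; the (image, boxes) pairs later
        become supervised training data for a vision model."""
        self.boxes = boxes

# A worker labels one image containing a single object.
task = AnnotationTask("t-001", "https://example.com/img/001.png", reward_near=0.5)
task.submit([BoundingBox(label="dog", x=34, y=50, width=120, height=96)])
print(task.boxes[0].label)  # "dog"
```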
At first glance, NEAR Tasks simply looks like crowdsourcing manual labor as a basic service for AI models. Is it really that important?
Typically, a complete AI model training run includes data collection, data preprocessing and annotation, model design and training, model tuning and fine-tuning, model validation and testing, model deployment, and model monitoring and updating.
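As a rough map of where annotation sits in that pipeline, here is a schematic Python sketch; every stage function is a hypothetical stub standing in for a whole phase of real work, not a call into any actual framework:

```python
# Schematic sketch of the training stages listed above. Every stage
# function is a hypothetical stub; the point is to show where human
# annotation (NEAR Tasks' niche) fits in the sequence.

def collect_data(source):
    return [f"{source}-sample-{i}" for i in range(3)]   # 1. data collection

def preprocess(data):
    return [s.strip().lower() for s in data]            # 2. preprocessing

def annotate(data):
    return [(s, "label") for s in data]                 # 3. manual annotation

def design_and_train(labeled):
    return {"weights": len(labeled)}                    # 4. model design + training

def tune_and_finetune(model, labeled):
    return model                                        # 5. tuning / fine-tuning

def validate(model, labeled):
    print("validation accuracy: (placeholder)")         # 6. validation and testing

def deploy(model):
    print("model deployed")                             # 7. deployment

def monitor(model):
    print("monitoring for drift")                       # 8. monitoring and updating

data = preprocess(collect_data("corpus"))
labeled = annotate(data)
model = tune_and_finetune(design_and_train(labeled), labeled)
validate(model, labeled)
deploy(model)
monitor(model)
```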
Most people assume the machine-driven part of this is much bigger than the human part, after all it seems more high-tech, but in reality human annotation is crucial to the whole training process.
Manual annotation can label the objects (people, places, things) in an image so that a computer learns a better vision model; it can transcribe speech into text and mark specific syllables, words, and phrases to help the computer train a speech-recognition model; and it can attach emotion labels such as happiness, sadness, or anger to text so that AI improves its sentiment-analysis skills.
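As one concrete case of that last point, here is a minimal Python sketch of what sentiment-annotation records might look like; the field names and example texts are purely illustrative, not any platform's real schema:

```python
from collections import Counter

# Hypothetical sentiment-annotation records: raw text plus the
# human-assigned emotion label a model would learn from.
sentiment_data = [
    {"text": "BTC broke its all-time high today!",        "label": "happy"},
    {"text": "My wallet got drained by a phishing link.", "label": "sad"},
    {"text": "The exchange froze my withdrawal again.",   "label": "angry"},
]

# A basic sanity check before training: how many examples per label?
print(Counter(record["label"] for record in sentiment_data))
```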
It is not hard to see that manual annotation is the foundation on which machines run deep-learning models: without high-quality annotated data, a model cannot learn efficiently, and if the volume of annotated data is too small, model performance will also be limited.
At present, many vertical AI niches do secondary fine-tuning or specialized training on top of large models such as ChatGPT. Essentially, these efforts build on OpenAI's base and add new data sources, especially manually annotated data, to train the model further.
For example, if a medical company wants to train a medical-imaging AI and offer hospitals an online AI consultation service, it only needs to upload a large amount of raw medical-image data to the Tasks platform, let users annotate it to complete the tasks, and then use the resulting manually annotated data to fine-tune and optimize a general large model, turning a general-purpose AI tool into an expert in that vertical field.
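For readers curious what that "fine-tune on annotated data" step can look like in code, here is a minimal transfer-learning sketch using PyTorch and torchvision. It freezes a pretrained general-purpose backbone and retrains only the classification head; the random tensors stand in for annotated medical images, and nothing here reflects any company's or NEAR's actual pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained general-purpose backbone (downloads weights on
# first run) and freeze it, so only the new head gets trained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new head, e.g. normal vs. abnormal

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for a batch of annotated medical images
# (N, C, H, W) and their human-assigned labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```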
However, the Tasks platform alone is obviously not enough for NEAR to become the leader among AI public chains. NEAR is also building AI Agent services in its ecosystem, which automate all of a user's on-chain behaviors and operations: once authorized, users can freely buy and sell assets in the market. This is somewhat similar to the Intent-centric approach, using AI-automated execution to improve the on-chain interaction experience. In addition, NEAR's strong DA (data availability) capabilities let it play a role in tracing AI data sources, tracking the validity and authenticity of AI model training data.
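As a loose illustration of that Intent-centric idea, here is a hypothetical Python sketch in which a user authorizes an agent once and the agent then executes high-level intents on the user's behalf; none of these names correspond to a real NEAR API:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A high-level goal the user signs, not a concrete transaction."""
    owner: str
    goal: str              # e.g. "swap 10 NEAR for USDC at best price"
    max_slippage: float

class Agent:
    """Hypothetical AI agent: turns an authorized intent into concrete
    on-chain actions on the user's behalf."""
    def __init__(self):
        self.authorized: set[str] = set()

    def authorize(self, owner: str) -> None:
        self.authorized.add(owner)      # user grants execution rights once

    def execute(self, intent: Intent) -> str:
        if intent.owner not in self.authorized:
            raise PermissionError("user has not authorized this agent")
        # A real system would plan routes, simulate, and submit
        # transactions; here we just report the plan.
        return f"executing '{intent.goal}' with slippage <= {intent.max_slippage:.1%}"

agent = Agent()
agent.authorize("alice.near")
print(agent.execute(Intent("alice.near", "swap 10 NEAR for USDC", 0.005)))
```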
In short, backed by a high-performance chain, NEAR's technical extensions and narrative in the AI direction appear to offer far more room than pure chain abstraction alone.
Half a month ago, when I was analyzing NEAR's chain abstraction, I already noted the advantages of NEAR's chain performance plus the team's superb web2 resource-integration ability.
Note: The long-term focus still depends on NEAR's layout and product delivery around "chain abstraction"; AI will be a nice bonus and a bull-market catalyst!