Pass Fun Forum
Author Topic: Open-R1: a Fully Open Reproduction Of DeepSeek-R1  (Read 1 times)
EzequielCr
« on: February 01, 2025, 02:15:17 PM »


Hey there! This post is an introduction to the project, not a claim that we've replicated R1 yet. We're iterating in the open, so as soon as we have evaluation numbers, we'll share them. You can follow our progress on Hugging Face and GitHub.


True, but it looks like there's absolutely nothing to be evaluated as of today. I assume the ultimate goal is to train a brand-new reasoning model and then use the same evaluation metrics as o1 and DeepSeek-R1.


Well, there ought to be at least some sanity check and validation to make sure the model was trained properly.


Oh yes, if you are talking about the evaluation numbers of DeepSeek's model, they're coming really soon!


As mentioned in the article, there is no model called Open-R1 to test at all ... not yet anyhow. This is a blog outlining that Hugging Face will take the DeepSeek R1 model, work out how it was built as laid out in the paper and from what they released, and then reproduce that process.


In truth, this is pretty much how science works ... A comes up with a plan, discovery or innovation, and it is tested by B, C and D to see if it is reproducible. That's been the foundation of research for a few centuries now.


This blog is not saying they have already done so ... It's a blog outlining an intent to start training a model like R1 and calling it Open-R1.


Also, DeepSeek-R1 was only released last week, and even in their paper they outlined the compute hours needed. While those are low compute hours for a SOTA model, that does not mean you can train said model in a week. I'd personally love to be able to train a transformer model in a week, but we may need to wait a while for that level of compute innovation.
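To put the compute-hours point in perspective, here is a rough sketch. The ~2.788M H800 GPU-hour figure is the total reported in the DeepSeek-V3 tech report (R1's RL stages come on top of an already-trained base model); the cluster sizes below are purely illustrative assumptions, and this ignores scaling inefficiencies:

```python
# GPU-hours are fixed; wall-clock (calendar) time scales with cluster size.
# ~2.788M H800 GPU-hours is the DeepSeek-V3 tech report's total.
gpu_hours = 2.788e6

for n_gpus in (256, 2048):  # illustrative cluster sizes, not actual configs
    days = gpu_hours / n_gpus / 24
    print(f"{n_gpus} GPUs -> ~{days:.0f} days")
```

So "low compute hours for a SOTA model" still means weeks to over a year of wall-clock time, depending entirely on how many GPUs you can throw at it.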


So there are no benchmarks for a model that has not been built yet, right? As outlined in the blog, and again in reply to your question.


However, fear not: there is a GitHub repo already with contributors (hell, I may join myself), some prelim work done, and a master plan. A good starting position.


@edbeeching has already evaluated the released models


( src: https://x.com/edwardbeeching/status/1884273209136275742)


R1 just trained on o1 outputs, so collectively ... /s. This is what the new AI czars are saying.


Hi! This blog post is an intro to the project, not a claim that we have actually reproduced R1 yet. We will definitely share the missing pieces when we have them; you can expect the models and datasets to be uploaded to this Hugging Face org and the code to be in this GitHub repo.


That's nice, and it's important for making sense of this incredible hype that lacks technical comprehension and explanation. Science is about reproduction, and if they claim to be open, let them fulfil the open part.


Please do publish the training cost.


We will!


Hi @bojan2501, thanks, we will indeed be working hard to make sure this training recipe can work for small language models on consumer hardware, since not everybody has a cluster of H100s at home :-) The tool we used for the images was Excalidraw! https://excalidraw.com


Looking forward to it!

WTF are you talking about?


should be a joke


It's really cool to see how the whole open-source community comes together!


Ops ...


5.5M is the number reported in the DeepSeek-V3 tech report (just the training run, not the experiments afaik); for R1 it's hard to estimate tbh, but much less than 5.5M imo.
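For reference, the widely quoted figure is straight arithmetic from the DeepSeek-V3 tech report, which prices its reported ~2.788M H800 GPU-hours at an assumed $2 per GPU-hour rental rate:

```python
# Cost figure as computed in the DeepSeek-V3 tech report:
# reported GPU-hours times the report's assumed rental price.
gpu_hours = 2.788e6            # H800 GPU-hours reported for the final training run
dollars_per_gpu_hour = 2.0     # rental-price assumption made in the report

total_cost = gpu_hours * dollars_per_gpu_hour
print(f"${total_cost / 1e6:.3f}M")  # ≈ $5.576M, the "5.5M" figure quoted by media
```

Note this covers only the final training run, not prior experiments, ablations, or staff, which is why the number is often criticized as an underestimate of the true project cost.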


Historically, they have never released the code or datasets of their LLM training, so I wouldn't expect this time to be different. If they did release it, that would be fantastic of course!


Yes, of course!


So essentially you're asking to replace existing censorship with another flavour of censorship?


The code for the models is inside the model repositories, e.g. for V3: https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/modeling_deepseek.py


Hello Team, I'm Ray Bernard, the author and developer of EQUATOR. My research team will be working on a paper focused on replicating specific elements of DeepSeek R1. Our objective is to recreate the cold start and supply your team with a dataset that includes CoT and other techniques to support these efforts. We'd love to contribute our work to help. Please let me know if you find this useful. Best, Ray Bernard https://www.facebook.com/groups/1186310571520299/


Where are the evaluation numbers? Without them you can't call it a reproduction.








That's quite interesting; I was asking myself why the concerns the author raised here are not being asked by others. I believe the work they have done is remarkable, but at the same time I wonder why they wouldn't publish these missing pieces if they are supposed to be fully open.
Why, even without reproduction and understanding of the development, could they impact the market so much in this way?










Interesting read, and it is great that we see more effort in this direction: more optimization and less brute force.
Also wondering what tool the author used for creating the step diagram.






I'm so glad that efforts like this already exist; I'm gonna try to contribute :-)


So racist article






WTF are you talking about?


Awesome to have this open reproduction started!


For Step #1, check out https://github.com/open-thoughts/open-thoughts!


https://x.com/ryanmart3n/status/1884284101265612856


Let's do this thing!






Does anyone know the actual training cost of R1? I can't find it in the paper or the announcement post. Is the 6M cost reported by media just the number taken from V3's training cost?








Has anyone asked the DeepSeek team to release their training data and code, or at least share them privately with an independent replication project like this? Have they refused such a request?


A faithful replication depends on using the same dataset and hyperparameters. Otherwise, any major discrepancies with the published benchmarks would be hard to pin down: whether due to training data differences or the replication method itself.






In the meantime we have to make best-guess estimates and see if we can get there ourselves.


You outline an excellent replication process for DeepSeek's reasoning training. I will try something similar.


This is really great info; can we fine-tune it for a particular use case when the code is released?






Please consider removing biased, tainted or unaligned training data, and make an effort to remove copyrighted works from the crawl. This will make the model more usable. If you reused Anthropic's curation checks, this might also help; removing obviously biased data will likely add a great deal of value. We don't want another polluted, unaligned open-source model, right? And no corporation would ever use DeepSeek or a model that reuses it, right?
We appreciate your work for the benefit of humanity, we hope.
Miike C from NJ






Can't wait! Hopefully the model will be uncensored, but whatever you can do is alright! Love seeing open source building itself up. I'm not smart enough to actually help, but I can contribute moral support lol


Hello guys, I am even just looking for the code for DeepSeek-V2, in order to fully understand multi-head latent attention. You don't seem to have code on Hugging Face even for that. Or am I missing something? I don't see anything in src/transformers/models. MLA is not properly explained in their paper, so it would be crucial to have code for this.
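For what it's worth, the core idea behind MLA can be sketched without the real code: instead of caching full per-head K/V tensors, you cache one small latent vector per token and up-project it into K and V per head at attention time. Below is a minimal NumPy toy with random stand-in weights; it deliberately omits the RoPE decoupling and all real dimensions from the DeepSeek-V2 paper, so treat it as an illustration of the caching trick only:

```python
# Toy sketch of Multi-head Latent Attention's KV compression (not DeepSeek's code).
# Cache: one d_latent vector per token, vs n_heads * d_head * 2 for standard MHA.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, d_head, d_latent = 64, 4, 16, 8  # toy sizes, d_latent << n_heads*d_head

# Random stand-ins for learned projection matrices
W_dkv = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)          # down-proj to latent
W_uk = rng.standard_normal((d_latent, n_heads * d_head)) / np.sqrt(d_latent)  # up-proj to K
W_uv = rng.standard_normal((d_latent, n_heads * d_head)) / np.sqrt(d_latent)  # up-proj to V
W_q = rng.standard_normal((d_model, n_heads * d_head)) / np.sqrt(d_model)

def mla_step(h_t, latent_cache):
    """One decoding step: append the compressed latent, then attend over the cache."""
    latent_cache.append(h_t @ W_dkv)              # only d_latent floats cached per token
    C = np.stack(latent_cache)                    # (T, d_latent)
    K = (C @ W_uk).reshape(len(C), n_heads, d_head)
    V = (C @ W_uv).reshape(len(C), n_heads, d_head)
    q = (h_t @ W_q).reshape(n_heads, d_head)
    scores = np.einsum('hd,thd->ht', q, K) / np.sqrt(d_head)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over cached positions
    out = np.einsum('ht,thd->hd', attn, V)
    return out.reshape(-1), latent_cache

cache = []
for _ in range(5):                                # decode 5 toy tokens
    out, cache = mla_step(rng.standard_normal(d_model), cache)
print(len(cache), cache[0].shape)
```

The memory win is that the cache grows by d_latent floats per token instead of 2 * n_heads * d_head; the actual architecture, shapes, and the decoupled RoPE path are in the DeepSeek-V2 paper and the modeling_deepseek.py files in the model repos.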
Powered by SMF 2.0 RC1.2 | SMF © 2006–2009, Simple Machines LLC | Theme by nesianstyles | Buttons by Andrea