• j4k3@lemmy.world
    link
    fedilink
    English
    arrow-up
    58
    ·
    2 days ago

    If China ever goes hardcore on open source across the board for all hardware and software, it would absolutely crush the present Western hegemony. It would be the most moral high ground move too.

    • Dr. Moose@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      14 hours ago

      Funny how in “communist” China almost no code is open source. It’s almost like it was never about “the people” huh

    • milicent_bystandr@lemm.ee
      link
      fedilink
      English
      arrow-up
      3
      ·
      15 hours ago

      Depending on how well/thoroughly it handles it, that could also remove a lot of the worry about companies adding CCP-accessible backdoors.

    • vext01@lemmy.sdf.org
      link
      fedilink
      English
      arrow-up
      20
      ·
      2 days ago

      From what I’ve heard, these RISC-V chips have a long way to go on performance. Is that still true?

      • j4k3@lemmy.world
        link
        fedilink
        English
        arrow-up
        15
        ·
        edit-2
        2 days ago

        Yes and no. It would have a long way to go to match x86 single-thread speeds. However, the future will belong to whatever single processor can handle all workloads; the current dual-processor split between GPU and CPU is a temporary hack. Around 8 years from now a new architecture will emerge as dominant. That is the 10 years it takes to go from idea to a real silicon product, and the problem has been obvious for 2 years already. The next architecture must be done from scratch, at a level very similar to the gap between RISC-V and x86 right now. So ultimately it is a no, because that redesign renders the present lead useless.

        Present processors are power constrained by the L2-to-L1 cache bus width: if all of the bits on that bus go high, it pulls the whole core down. This is where things are optimized for high-speed single-thread operation, i.e. traditional code. Large math tensors need a wide bus to load and offload quickly, so the two goals are fundamentally incompatible. Regardless of the merits of everyone running AI or not, in the data center business, where profit margins are very thin, anyone who can make a single processor that scales to handle both workloads well enough will win out in the long run.

        This dual-processor paradigm has already been tried and it failed. In the 286-to-386 era, a second floating-point math unit (the x87 coprocessor) was required for any advanced workload like CAD. That dual-processor architecture was a flop, and everyone in hardware is aware of this history.

        So why would anyone support a new proprietary hardware design for this new generation, one that requires a fortune in royalties, when a similar processor is negligibly different at the same phase of development and is a free and open instruction set architecture with no royalties? Plus, this means the IC designer is no longer locked into an ecosystem of vendor peripherals. Anyone can design and sell little circuit blocks and on chip peripherals, even proprietary ones, for use on any chip. This is basically true open market capitalism for an ISA: a standardized framework for anyone to build on, instead of the notoriously authoritarian, oppressive, and anticompetitive Intel. The outcome of that set of constraints seems obvious to me.
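        The bus-width power claim can be put in back-of-envelope terms with the standard dynamic-power formula P = α·C·V²·f summed over bus lines. All the capacitance, voltage, and frequency numbers below are illustrative assumptions, not measurements of any real core; the point is only that switching power scales linearly with bus width at a fixed clock:

```python
# Toy dynamic-power estimate for a cache bus. Numbers are made-up
# assumptions chosen only to show the width scaling, not real specs.

def bus_toggle_power(width_bits, freq_hz, cap_per_line_f, v_dd, activity=0.5):
    """Dynamic switching power: activity * C * V^2 * f, summed over lines."""
    return activity * width_bits * cap_per_line_f * v_dd**2 * freq_hz

# A narrow 64-bit bus vs. a tensor-friendly 1024-bit bus, same 4 GHz clock,
# assuming ~1 pF per line and a 0.9 V supply.
narrow = bus_toggle_power(64, 4e9, 1e-12, 0.9)
wide = bus_toggle_power(1024, 4e9, 1e-12, 0.9)

print(f"64-bit bus:   {narrow:.2f} W")
print(f"1024-bit bus: {wide:.2f} W")
print(f"ratio: {wide / narrow:.0f}x")
```

        At the same clock and voltage, the wide bus burns 16x the switching power of the narrow one, which is the tradeoff between latency-optimized cores and tensor-friendly wide buses in a nutshell.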

        • yetAnotherUser@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          3
          ·
          1 day ago

          Anyone can design and sell little circuit blocks and on chip peripherals, even proprietary ones, for use on any chip.

          What’s the likelihood of a dominant player emerging and implementing patented, proprietary RISC-V extensions which turn out to be necessary for high performance? And if such a company gains sufficient market share, they could turn RISC-V into basically another x86-64 with many proprietary extensions. Sure, others could create their own RISC-V base processor, but if their performance is a fifth of the proprietary vendor’s, who would purchase them?

          • j4k3@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            1 day ago

            Zero. There are no secrets on silicon. Everything can be reverse engineered based on observation and lapping the die.

            • yetAnotherUser@discuss.tchncs.de
              link
              fedilink
              English
              arrow-up
              2
              ·
              1 day ago

              Sure, but not legally so. Patents are the reason there are really only two x86-64 vendors, and a company could similarly patent their own RISC-V modifications.

              • j4k3@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                21 hours ago

                Not really. Reverse engineering has traditionally had limited value because of the exponential growth of silicon and the time it takes to develop anything on a node.

                It is impossible to overstate how insane present CPU speeds are. These are high frequencies even by radio standards: traces no longer need to be physically connected, capacitive coupling becomes a major part of signal transmission, and the entire RLC gamut matters. Anyone can, and likely every major player does, reverse engineer every major hardware design they can acquire. But that only lets you understand what is already irrelevant in design terms; the new chip in question will already be well into amortizing its development costs.

                You see, the reason a chip costs so much initially is the cost of the node and all the tooling involved in creating it; the tooling costs are immense now. The reason hardware prices drop over time is that actually producing the chips is dirt cheap by comparison. The initial price is a projection of how long it will take to pay back that investment assuming the units sell in low volume. As volume increases and the loans are paid back, the price is dropped to reach lower-priced market segments. There is a market saturation balance that must be maintained to keep the next generation viable. Each new product takes 10 years to create. This means a company like Nvidia, with (IIRC, probably not exactly) something like an 18-month major product release schedule, will have the next 6 generations of products in various phases of planning and design. If you reverse engineer a publicly available product, you are largely reverse engineering the fab’s capabilities while looking at a product 6 generations behind the real cutting edge. Not to mention, you will never be successful releasing your reverse-engineered product, because you cannot sell it at the high initial price required to pay for the tooling when the original product has already done so and can be sold at cost of manufacture plus a small markup.
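                The amortization argument can be sketched numerically. This toy model (the tooling and marginal costs are made-up round numbers, not real fab figures) shows why an incumbent that has already paid off its tooling can always undercut a late-arriving clone:

```python
# Toy amortization model: huge fixed tooling cost, tiny marginal cost.
# Both numbers are illustrative assumptions, nothing more.

TOOLING_COST = 5_000_000_000   # assumed up-front node/tooling investment ($)
MARGINAL_COST = 50             # assumed cost to actually produce one chip ($)

def break_even_price(units_sold):
    """Price per chip needed to recover tooling across `units_sold` chips."""
    return MARGINAL_COST + TOOLING_COST / units_sold

for units in (1_000_000, 10_000_000, 100_000_000):
    print(f"{units:>11,} units -> ${break_even_price(units):,.0f} per chip")
```

                A cloner starting from zero volume has to charge the prices at the top of that curve, while the original vendor, already amortized, can sell near the $50 marginal cost.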

                The enormous difference between the cost of tooling and the cost of actually making the chip is essential to understand in combination with exponential growth. This is what William Shockley realized in the 1950s and convinced the US government to pursue. The entire modern world we have all known is due to this exponential growth. It is remarkable in human history as the only time a civilian business endeavor has outclassed the economic growth created by the largest militaries of the world. That is what Shockley actually realized: that military budgets could not match the growth potential of silicon, up to the plateau where physics prevents further node scaling. There are only a few new nodes left, and those are the next 10 years, which means we are already at the end of scaling because hardware designers are actually already there. The concept of venture capital is a pseudo extension of military budgeting constraints in a roundabout way. Indeed, Silicon Valley is where the real battle of the cold war happened. The Soviet Union and China failed to realize how silicon scaling would become a decisive military advantage. Once that ball started rolling, it is impossible to catch the front line so long as the design edge is kept a closely guarded secret and the extreme capital required is too high to be viable.

                This is the real reason why your mobile devices are all running orphaned kernels on undocumented hardware. Your need to buy new devices when you are told to is the primary factor driving these new nodes. You are less likely to realize that this is ultimately a tax paid to avoid large-scale wars and major conflicts. This is why the changes happening right now in politics are not trivial: moves are being made that look like the era before venture capital and the Pax-Silicon™. We are already at the end of silicon’s exponential growth, and there is no replacement for it to outstrip military-based spending and growth as has been the standard for the rest of human history. This will inevitably lead to hoarding, scarcity, and conflict; the enormity of the funding involved will push militaries to press their advantages before they disappear. These are the factors that created the world wars of the past, and they will arise again. The next era of technology is going to be biology, but we are at least a couple of centuries away from biology as an engineering science where something like a synthetic brain could produce a Turing-complete deterministic computer on par with a present CPU. There is no clear path to exponential growth there either, like there was with scaling silicon. Perhaps software organization, libraries, and database scaling will be an exponential growth factor, but I don’t know how that would have a barrier to entry on par with silicon, one insurmountable by everyone including large governments and coalitions of governments.

                So this is the world that is changing around the issue of RISC-V. It will come into its own in a post VC growth era. We are at the end of that growth already. Reverse engineering becomes relevant now, and proprietary, secretive strategy no longer has the same magnitude of military significance. My narrative here will become more and more obvious with time. This has been a years-long curiosity spanning many of my interests: why the world prior to the 1950s was so different, why the USA won the cold war, and understanding the history of the microprocessor to wrap my head around all the peripherals in an Arduino, which itself was born out of a desire to learn Megasquirt back when I was a hotrod car nut ages ago. I abstract across broad spaces like this and like to simplify complexity, because I’m dumb like that.

                Things like patents are just weapons of the super rich; they have no real relevance here. The outcomes of these cases have nothing to do with justice or right and wrong. They are battlefronts between militaries with convoluted rules of engagement. In the next 10 years this proxy conflict space will be abandoned and everything changes into an unknown state. Likely, new silicon will become far too expensive to create, and incremental nonsense will give way to more nuanced innovations. There will also be a lot of very expensive products for the super rich and only scraps for the plebs, as there is no reason to scale pricing down over time when exclusivity can instead be marketed to the elite.

                • qzrt@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  13 hours ago

                  I feel there’s a lot here that is misunderstood…

                  Indeed, Silicon Valley is where the real battle of the cold war happened. The Soviet Union and China failed to realize how silicon scaling would become a decisive military advantage.

                  If we are talking about the cold war era: in 1991, when the cold war ended with the dissolution of the Soviet Union, China had a lower GDP than North Korea. China wasn’t even capable of being part of the “war” for silicon at that point. China’s GDP has grown 46x since 1991, compared to the US’s, which has only grown 4x.

                  Once that ball started rolling, it is impossible to catch the front line so long as the design edge is kept a closely guarded secret and the extreme capital required is too high to be viable.

                  That’s not the case at all. Look at Intel: it used to be at the bleeding edge and now it is on the brink of bankruptcy. The US isn’t at the forefront of silicon either, except maybe for some designs like Nvidia’s and AMD’s, and that’s only because until recently GPUs were mostly used for games. Now they have actual applications for the military and other fields.

                  The company that makes all of these chips possible is a Dutch one, ASML. Without them nothing would have been possible. They create the very expensive and very large (40 freight containers, 3 cargo planes, and 20 truckloads for 1 machine) tools used for fabricating the chips. The only benefit the US gets from this is that the Dutch are an ally and are thus required to follow US policy. That is the only thing keeping China from more advanced nodes: the US banning the Dutch from selling these tools to China. Every company fabricating the actual nodes (TSMC, Samsung, and lastly Intel) uses machines from ASML, with TSMC the farthest ahead of the other two, and Intel by far the farthest behind, even behind Samsung. TSMC also can’t sell its chips to China, for the same reason as the Dutch: Taiwan follows US sanctions.

                  So really, the only thing the US has going for it isn’t some grand lead it earned because some guy in the 1950s made a prediction. It is the power it holds over allies, preventing them from providing the same technology that a few US companies depend on to other foreign countries. And given the last 1-2 months, the US seems dead set on losing all of those allies.

                  Still even with worse hardware, with all the sanctions the US has imposed, China was still able to create an AI model on par or better than the US’ best model. They are being forced to innovate instead of just throwing a crap ton of money at more GPUs and brute forcing it.

                  This is the real reason why your mobile devices are all running orphaned kernels on undocumented hardware.

                  You should look up LineageOS. You can request the kernel source for any Android smartphone, as required by the license. I maintained one of the devices for a while; it is not an easy task to keep a device up to date with the latest kernel. The only ones really doing it are Intel and AMD, as I imagine there are a ton of x64 servers running Linux.

                  So this is the world that is changing around the issue of RISC-V. It will come into its own in a post VC growth era.

                  There are still a lot of problems. RISC-V is an instruction set, and a very limited one: the base ISA doesn’t cover things like a GPU, and vector support exists only as an extension, not in the base. Almost everything else around it is going to be proprietary, including the chip design. There isn’t any chip design that is public, except for maybe two that aren’t even actual products, just reference designs. Because, again, it is just the instruction set; literally everything else you need, like memory controllers, branch prediction, etc., is proprietary. All this really spares China is having to create their own software on top of the hardware: they can just use Linux, which increasingly supports RISC-V.
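                  The base-plus-extensions point is visible in the ISA naming scheme itself. Here is a toy sketch of parsing an ISA string (real toolchain parsers also handle multi-letter Z-extensions like Zicsr and version suffixes; this only covers single-letter extensions and the common ‘g’ shorthand for IMAFD):

```python
# Minimal sketch of how a RISC-V ISA string encodes a base plus extensions.
# Real parsers handle multi-letter "Z" extensions and version numbers;
# this toy covers only single letters and the 'g' shorthand.

def parse_isa(isa):
    """Split an ISA string like 'rv64gcv' into (base, extension set)."""
    isa = isa.lower()
    if not (isa.startswith("rv32") or isa.startswith("rv64")):
        raise ValueError(f"not a recognized RISC-V ISA string: {isa}")
    base, rest = isa[:4], isa[4:]
    exts = set()
    for ch in rest:
        if ch == "g":            # 'g' = general shorthand for i, m, a, f, d
            exts.update("imafd")
        else:
            exts.add(ch)
    return base, exts

base, exts = parse_isa("rv64gcv")    # 64-bit, general + compressed + vector
print(base, sorted(exts))
base2, exts2 = parse_isa("rv64gc")   # same profile, but no vector extension
print("vector supported:", "v" in exts2)
```

                  The mandatory base (just the integer instructions) is tiny; everything else, vector included, is opt-in, which is exactly why two “RISC-V” chips can differ so much.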

                  • j4k3@lemmy.world
                    link
                    fedilink
                    English
                    arrow-up
                    1
                    ·
                    13 hours ago

                    The USA had trouble with fabs and with the safety infrastructure needed to transport the extremely hazardous chemicals, so it happened to be convenient to outsource the fabs. It is all primarily funded by US-based venture capital. These nations are not in control of those assets as some kind of independent thing; if you look at how the transfers happened, it was all essentially done so that the US stays in control.

                    I spent a few months going down the rabbit hole of the computer history YT channel’s oral history interviews. I’m aware those likely had quite an American bias, but in aggregate there are a lot of stories describing how this played out from the people who were involved. There were also several interviews that went into various military aspects that are quite interesting. It has been around 8 years since I went down that rabbit hole, so my memory is tinted; I’m good at remembering my abstracted simplifications but not the specific details.

                    My total understanding of hardware is kind of frozen around some parts of an ISA. I built Ben Eater’s breadboard computer, but I struggle with pipelines and out-of-order instructions, branching in FORTH/assembly, and whatever is going on with C, right up until I get to Python, which I can read, and bash scripts, which I prefer. I’m not quite as naive as I like to play, but pretty damn close.

                    I figure RISC-V will still play the baseline in the future. If new nodes are not possible, the present model of royalties will not hold up. Standardization will be good for everyone. The last time I watched a RISC-V conference was probably around 2021, but it looked really solid then. Most of the old guard like Intel were major financial contributors to RISC-V at that time.

        • enumerator4829@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 day ago

          I agree with you, mostly. Margins in the datacenter are thin for some players. Not Nvidia; they are at like 60% pure profit per chip, including software and R&D. That will have an effect on how we design stuff in the next few years.

          I think we’ll need both “GPU” and traditional CPUs for the foreseeable future: GPU-style for bandwidth- or compute-constrained workloads, and CPU-style for latency-sensitive workloads or pointer chasing. That said, I do think we’ll slap them both on top of the same memory, APU-style à la MI300A.
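          The bandwidth-vs-latency split maps onto the classic roofline model. A quick sketch (the peak-FLOPS and bandwidth figures are illustrative assumptions, not any particular product’s specs) shows why pointer-chasing code and dense tensor math want different hardware:

```python
# Roofline-style sketch: whether a kernel is bandwidth- or compute-bound
# depends on its arithmetic intensity vs. the machine balance point.
# Hardware numbers below are rough, illustrative assumptions.

PEAK_FLOPS = 100e12   # assumed peak compute, FLOP/s (GPU-class accelerator)
BANDWIDTH = 2e12      # assumed memory bandwidth, bytes/s

def attainable_flops(intensity):
    """intensity: FLOPs performed per byte moved (arithmetic intensity)."""
    return min(PEAK_FLOPS, BANDWIDTH * intensity)

balance = PEAK_FLOPS / BANDWIDTH   # FLOP/byte where the roofline bends
print(f"machine balance: {balance:.0f} FLOP/byte")

# Pointer chasing / sparse code sits far left (~0.1 FLOP/byte), hopelessly
# bandwidth-bound; big dense matmuls (100+ FLOP/byte) can approach peak.
for ai in (0.1, 1, 50, 200):
    print(f"AI {ai:>5}: {attainable_flops(ai) / 1e12:6.1f} TFLOP/s")
```

          Below the balance point, only more bandwidth helps (GPU-style designs); above it, only more compute helps, which is why sharing one memory pool APU-style is attractive for mixed workloads.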

          That is, as long as x86 has the single-threaded advantage, RISC-V won’t take over that market, and as long as GPUs have higher bandwidth, RISC-V won’t take over that market either.

          Finally, I doubt we’ll see a performant RISC-V chip from China the next decade - they simply lack the EUV fabs. From outside of China, maybe, but the demand isn’t nearly as large.

          • bruhduh@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            15 hours ago

            AMD’s APU approach is already bearing fruit: PlayStation, Xbox, Steam Deck, Strix Halo, and their Instinct datacenter cards show that the one-chip approach is good, as you’ve said.

          • j4k3@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            1 day ago

            OpenAI is showing holes in the armor already. Open source always wins in the long term. There are many attempts to limit RISC-V adoption, but if you look, even the old guard is putting chips on this board.

            Having half a data center on a different architecture and load is untenable. Nvidia got lucky and is in a good position, but that will only last 6-8 years at most, and likely far less if China takes Taiwan and North Korea attacks South Korea at the same time. Nvidia has nothing without TSMC in Taiwan. That would leave only Intel, and they are a train wreck relying on TSMC too. All of the CHIPS Act fabs will be trailing edge by the time they come online, so those won’t save Nvidia either. This is what the US voted for: massive tariffs and WW3 by 2030.

            It will end up just like with AI in China. They are more agile and capable than the West imagines. They will pivot the chip limitations into the future. All of the American hegemony is based on layers upon layers of anticompetitive stagnation. Once those walls come down the future will move more quickly. All of these US companies are traitors as far as I am concerned. They outsourced at the expense of their neighbors and country. There are hundreds of thousands of homeless people in the USA. We have neo feudalism largely thanks to these shit companies. I hope they all crash and burn and will gladly buy Chinese.

            Also, with the current posturing of the USA towards Europe, EUV may become much more available to China. The Chinese look a whole lot less like stupid fascist Nazis than the US does now. We are the ones creating massive human rights violations and burning down the world in rancid stupidity. There is no moral ground to stand on, so don’t expect ASML and the Dutch to feel all warm and fuzzy about US loyalties.

      • notthebees@reddthat.com
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        1 day ago

        Yes, but it doesn’t have ARM’s level of growing pains.

        Current RISC-V SBCs are about 10 years behind performance-wise, which isn’t as much of a problem as it sounds. The core count is there, just not the single-core performance.

        • zarenki@lemmy.ml
          link
          fedilink
          English
          arrow-up
          1
          ·
          14 hours ago

          I’m only aware of one RISC-V system where I can say the core count is there: the Milk-V Pioneer board and its 64-core SG2042 processor from two years ago. It’s comparable in price to a 64-core ARM Ampere CPU+motherboard (USD$1500 for the board), which seems somewhat reasonable when not considering the performance of each core. Hopefully the C930 core described in this article leads to more systems that aim for multi-core performance.

          Most RISC-V development boards are only 4 cores or fewer, with just a few popping up in the last year with 8 cores and nothing higher besides the SG2042. The best single-core RISC-V performance so far is on the SiFive P550 but it’s only 4 cores and comes on a development board that costs USD$500 (plus another $150 for tariffs if shipping to the US). You could easily get a 12-core AMD CPU and motherboard combo for less than that.

          • notthebees@reddthat.com
            link
            fedilink
            English
            arrow-up
            1
            ·
            9 hours ago

            There’s also the DeepComputing mainboard from Framework. Also, that P550 board uses a newer CPU, while the Framework mainboard uses the JH7110.

            The P550 is less Raspberry Pi and more like those Rockchip-powered boards from Radxa (ignoring core count).

            • zarenki@lemmy.ml
              link
              fedilink
              English
              arrow-up
              1
              ·
              3 hours ago

              I was only talking about high core count and high (relatively speaking) single-core performance. The DeepComputing Framework board is neither. Its JH7110 is only 4 cores and a rather old processor, which seems like an odd choice for a product releasing in 2025. At least the software support is great since distros have been working with VisionFive 2 and Milk-V Mars for years.

              It’s also the only currently-available Framework 13 board with fewer than 6 cores, though core count isn’t remotely comparable between architectures. At this price ($209 for lone board with 8GB RAM, $799 for full laptop) I’d prefer to see something at the very least comparable to SpacemiT K1, which has 8 cores and vector support, and is on the Banana Pi BPI-F3 (8GB version is $95).

      • gandalf_der_12te@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 days ago

        I’ve heard rumors that they’ve made surprising progress so far, but I have nothing substantial to point you to; it’s mostly rumors I got from friends and other people at this point.