  • ur-whale 2 hours

    I've played with a bunch of RISC-V platforms, mostly SBCs in the raspi class.

    Beyond the potential platform fragmentation due to the variability of the ISA (a very unfortunate design choice IMO), mentioned elsewhere in this thread, what I find most frustrating is the boot process / equivalent of BIOS in that world.

    My impression: complete lack of standardization, a ton of ad-hoc tools native to each vendor, a complete mess, especially when it comes to getting the board to boot from devices the vendor didn't target (e.g. SSDs).

    Until two things happen:

    1. a CPU with somewhat competitive compute power appears (so far, all the SBCs I've tried are way behind ARM and x86)

    2. a unified boot environment which supports a broad range of devices to boot from (SSD, network, SD card, hard drives, etc.)

    the whole RISC-V thing will remain a tiny niche, especially because when a vendor loses interest in the platform, all of the software native to that platform goes to rot immediately (not that it was particularly good quality in the first place).

  • ljhsiung 13 hours

    > Enabling new business models

    This is true, but only for the bigger players. The nature of hardware still fundamentally favors scale and centralization. Every hyperscaler eventually gets to a size where developing in-house CPU talent is just straight-up better (Qcom and Ventana + Nuvia, Meta and Rivos, Google's been building their own team, Nvidia and Vera Rubin; God help Microsoft though). This does not bode well for RISC-V companies, who are just being used as a stepping stone. See Anthropic, which currently licenses but is rumored to be developing its own in-house talent [1].

    > Extensibility powers technology innovation

    >> While this flexibility could cause problems for the software ecosystem...

    "While" is doing some incredible heavy lifting. It is not enough to be able to run Ubuntu, as may be sufficient for embedded applications, but to also be fast. Thusly, there are many hardcoded software optimizations just for a CPU, let alone ARM or x86. For RISC-V? Good luck coding up every permutation of an extension that exists, and even if it's lumped as RVA23, good luck parsing through 100 different "performance optimization manuals" from 100 different companies.

    > How mature is the software ecosystem?

    10 years ago, when RISC-V was invented, the founders said 20 years. 10 years later, I say 30 years.

    The nature of hardware, as well, is that the competition (ARM) is not standing still either. The reason for ARM's dominance now is the failure of Intel, and the strong-arming of Apple.

    I have worked in and on RISC-V chips for a number of years, and while I am still a believer that it is the theoretical end state, my estimates just feel like they're getting longer and longer.

    [1]: https://www.reuters.com/business/anthropic-weighs-building-i...

    adgjlsfhk1 4 hours

    > good luck parsing through 100 different "performance optimization manuals" from 100 different companies.

    Imo this is pretty misguided. If you're writing above assembly level, you can read the performance optimization manual for Intel, and that code will also be really fast on AMD (or even apple/graviton). At the assembly level, compilers need to know a little bit more, but mostly those are small details and if they get roughly the right metrics, the code they produce is pretty good.

  • Animats 13 hours

    Huh? That link returned:

        Your submission was sent successfully! Close
    
        Thank you for contacting us. A member of our team will be in touch shortly. Close
    
        You have successfully unsubscribed! Close
    
        Thank you for signing up for our newsletter!
        In these regular emails you will find the latest updates about Ubuntu 
        and upcoming events where you can meet our team. Close
    
        Your preferences have been successfully updated. Close notification
    
        Please try again or file a bug report. Close

    shakna 13 hours

    There's an email signup box on the right side on desktop, or bottom of the page on mobile. Maybe you somehow managed to hit it, or see it during some component update.

  • storus 11 hours

    Will RISC-V end up with the same (or even worse) platform fragmentation as ARM? Because of the absence of any common platform standard, we have phones that are only good for landfill once their support lifetime is up, and drivers never getting upstreamed to the Linux kernel (or upstreaming not even being possible due to the completely quixotic platforms and boot protocols each manufacturer creates). RISC-V allows even higher fragmentation in the portions of the instruction set each CPU supports, e.g. one manufacturer might decide MUL/DIV are not needed for their CPU (the "M" extension).
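
    A minimal sketch of how that plays out at the toolchain level (my own illustration, not from the comment; the cross-compiler name and exact output are assumptions about a typical GCC setup). The -march string tells the compiler which extensions it may rely on:

        /* mul.c -- one multiply, two very different results depending on -march.
         *
         *   riscv-gcc -march=rv32im -mabi=ilp32 -O2 -S mul.c
         *       -> the compiler may emit a single `mul` instruction
         *   riscv-gcc -march=rv32i  -mabi=ilp32 -O2 -S mul.c
         *       -> no hardware multiplier assumed; the compiler instead calls
         *          the libgcc software multiply routine (__mulsi3)
         */
        #include <stdint.h>

        int32_t mul(int32_t a, int32_t b) {
            return a * b;   /* one line of C, two different instruction sequences */
        }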

    tliltocatl 26 minutes

    PC/x86 was an extreme outlier, sadly, and it was because of the Microsoft/Intel business model. The architecture details were historically mostly decided by Wintel, yet the system integration was done by many vendors, whose best interest was to stay as compatible as possible. It's unlikely that another platform will be able to reach this state; the PC architecture work was subsidized by the M$ software monopoly, which nobody would want to suffer through again.

    pjmlp 3 hours

    Just like everything else outside the PC, thanks to clones becoming a thing.

    One reason UNIX became widely adopted, besides being freely available versus the other OSes, was that it allowed companies to abstract their hardware differences, offering some market differentiation while keeping some common ground.

    Those phones' common ground is called Android, with a Java/Kotlin/C/C++ userspace; folks should stop seeing them as GNU/Linux.

    bsder 7 hours

    > Will RISC-V end up with the same (or even worse) platform fragmentation as ARM?

    Sadly, yes. RISC-V vendors are repeating literally every single mistake that the ARM ecosystem made and then making even dumber ones.

    hajile 11 hours

    RVA23 is the standard target for compilers now. If you support newer stuff, it’ll take a while before software catches up (just like SVE in ARM or AVX in x86).

    If you try to make your own extensions, the standard compiler flags won't support them and they'll probably be limited to your own software. If an extension is actually good, you'll have to get everyone on board with a shared, open design, then get it added to a future RVA standard.

    MobiusHorizons 6 hours

    Compiling the code is not the issue. The hard part is the system integration, most notably the boot process and peripherals. It's not actually hard to compile code for any given ARM or x86 target. Even much less open ecosystems like IBM mainframes have free and open-source compilers (e.g. GCC). The ISA is just how computation happens. But you have to boot the system and get data in and out for the system to be actually useful, and pretty much all of that contains vendor-specific quirks. It's really only the x86 world where that got so standardized across manufacturers, and that was mostly because people were initially trying to make compatible clones of the IBM PC.

    storus 11 hours

    Thanks, that however addresses only a part of the problem. ARM also suffers from having no boot/initialization standard; each manufacturer does it their own way instead of what the PC had with BIOS or UEFI, making ARM devices incompatible with each other. I believe the same holds for RISC-V.

    Findecanor 10 hours

    There is a RISC-V Server Platform Spec [0] on the way that is supposed to standardise SBI, UEFI and ACPI for server chips, and it is expected to be ratified next month. (I have not read it myself yet.)

    [0]: https://github.com/riscv-non-isa/riscv-server-platform

    hajile 10 hours

    There has been concerted effort to start working on these kinds of standards, but it takes time to develop and reach a consensus.

    Some stuff, like the BRS (Boot and Runtime Services Specification) and SBI (Supervisor Binary Interface), already exists.

    indolering 11 hours

    The answer is unequivocally yes: RISC-V is designed to be customizable, and a vendor can put whatever they like into a given CPU. That being said, profiles and platform specs are designed to limit fragmentation. The modular design and essential core ISA also make fat binaries much more straightforward to implement than on other ISAs.
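
    As a rough sketch of the fat-binary idea (my own illustration; query_extension() is a hypothetical stand-in for a real capability query such as the kernel's hardware-probing interface, and the checksum functions are placeholders):

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Hypothetical helper: in real code this would ask the OS which
         * extensions the running core supports. Hard-wired here. */
        static bool query_extension(const char *ext) { (void)ext; return false; }

        /* Two builds of the same routine, shipped in one binary. */
        static void checksum_scalar(const void *buf, size_t n) { (void)buf; (void)n; puts("scalar path"); }
        static void checksum_vector(const void *buf, size_t n) { (void)buf; (void)n; puts("vector (V extension) path"); }

        static void checksum(const void *buf, size_t n) {
            /* Dispatch once at runtime instead of shipping per-vendor binaries. */
            if (query_extension("v"))
                checksum_vector(buf, n);
            else
                checksum_scalar(buf, n);
        }

        int main(void) {
            char data[64] = {0};
            checksum(data, sizeof data);
            return 0;
        }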

    hajile 11 hours

    You can choose to develop proprietary extensions, but who’s going to use them?

    A great case study is the companies that implemented the pre-release vector standard in their chips.

    The final version is different in a few key ways. Despite substantial similarities to the ratified version, very few people are coding SIMD for those chips.

    If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.

    The only place I see proprietary extensions surviving is the embedded space, where they already do this kind of stuff, but even that seems to be the exception with the RISC-V chips I've seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).

    LeFantome 6 hours

    Yes, extensions are perfect for embedded. But not just there.

    Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.

  • mcdow 14 hours

    I’m looking forward to using a RISC-V computer in 20 years

    3abiton 14 hours

    While its current performance is not competitive, there are already interesting options. I got the Orange Pi RISC-V version, mainly to test RISC-V; while it's slow compared to other ARM SoCs, it's still better than I expected. There are even RISC-V TPUs now.

    wg0 12 hours

    Donald Trump might make it five.

    aappleby 13 hours

    You're probably already using a RISC-V computer, it's just embedded as a supervisor in some other gadget (or vehicle) you own.

    themafia 10 hours

    I look forward to running my _own_ software on a RISC-V computer.

    bityard 12 hours

    I already have one! (But it's technically a soldering iron...)

    LeFantome 5 hours

    Pinecil?

    znpy 14 hours

    unironically, this.

    i've been hearing about arm computers for almost twenty years, and only just recently have general-purpose, decently-priced arm laptops been released (qualcomm laptops, the macbook neo).

    and arm desktops are still not a thing, in practice.

    heresie-dabord 11 hours

    > arm desktop are still not a thing

    The desktop market is not the only product space anymore.

    Apple has had brilliant success with its ARM processors, proving that ARM is more than capable. Before Apple's switch, Chromebooks had been using ARM since 2011.

    Android is the dominant operating system in mobile and most Android devices use the ARM platform. Many of these devices have desktop capability -- they are a viable convergence platform.

    andai 14 hours

    I think the Surface Laptops (2018?) count, and arguably the previous models (2012+) sorta-kinda count (tablet + keyboard).

    Side note: It's kinda funny to me that "the keyboard is detachable, the screen is glass and you can touch/write on it" makes it "lesser" than a laptop rather than being an upgrade.

    But yeah, definitely happy to see more in this space. Now we just need e-Paper laptops to take off as well :)

    mavhc 13 hours

    I have an ARM desktop from 1986 or 1987

    https://chrisacorns.computinghistory.org.uk/Computers/A500.h...

    znpy 32 minutes

    technically true, practically irrelevant.

    Joker_vD 14 hours

    Well, Apple M1/M2/etc. are, technically, ARMv8, and they're available as desktops.

    znpy 31 minutes

    they're not general-purpose in the sense that you can run any operating system, nor are they decently priced.

    Joeboy 14 hours

    Also the Acorn Archimedes is, technically, an ARM / RISC desktop.

    bluebarbet 12 hours

    Distant memories of a 1980s London classroom.

    ninth_ant 13 hours

    This underestimates the will of governments and companies in Europe and especially China to reduce their dependency on US-controlled technology.

    wk_end 12 hours

    ARM isn't US controlled, is it? British and also now Japanese since it's owned by SoftBank.

    Meanwhile, wouldn't China be more heavily invested in Loongson?

    hajile 11 hours

    ARM is British (America’s closest ally) and proprietary. If you’re swapping, just eliminate the risk and cost entirely.

    LoongArch has 32-bit instructions only. This means no MCUs due to poor code density. That forces them into RISC-V anyway, at which point you might as well pour all your money and dev time into one ISA instead of two. RISC-V has way more worldwide investment, meaning LoongArch looks like a losing horse in the long term when it comes to software.

    gggmaster 9 hours

    Quite the contrary, the fragmented ecosystem is holding RISC-V back.

    There are currently 3 variants of the LoongArch ISA. The reduced 32-bit version targets MCUs. And LoongArch64 ATX/mATX motherboards with UEFI support are readily available. This makes it far easier to develop with LoongArch.

    Tostino 13 hours

    I hope our complacent companies get a shot of competition.

    IshKebab 12 hours

    I think 10 years is a more realistic estimate. Probably first in servers and Android phones.

    ThatMedicIsASpy 10 hours

    They are everywhere already in microcontrollers like ESP32.

    IshKebab 1 hours

    Yeah, but OP was talking about directly using a RISC-V computer. The embedded RISC-V CPUs are effectively black boxes.

  • stuxnet79 13 hours

    Not my area of expertise, but what exactly is the difference between RISC-V and PowerPC? Didn't PowerPC get a good run in the 90s and 2000s? Just wondering why there's renewed interest in RISC-like architectures when the industry has already explored that area thoroughly.

    LeFantome 6 hours

    There are many more RISC chips than not. Apple Silicon is RISC. All ARM is RISC (eg. Raspberry Pi).

    Joker_vD 12 hours

    Ah, PowerPC. For a RISC processor it surely had a lot of instructions, most of them quite peculiar. But hey, it had fixed-length instruction encoding and couldn't address memory in instructions other than "explicit memory load/store", so it was RISC, right?

    bobmcnamara 9 hours

    Also byte-reversed load/store instructions, but no reverse-the-register instruction.

    mikestorrent 12 hours

    x86_64 machines are RISC under the hood and have been for ages, I believe; microcode translates your x64 instructions to RISC instructions that run on the real CPU, or something akin to that. RISC never died; CISC did, but it is still presented as the front-facing ISA because of compatibility.

    samsartor 11 hours

    I think that this is something of a misunderstanding. There isn't a literal RISC processor inside the x86 processor with a tiny little compiler sitting in the middle. It's more that the out-of-order execution model breaks up instructions into μops so that the μops can separately queue at the core's dozens of ALUs, multiple load/store units, virtual-to-physical address translation units, etc. The units all work together in parallel to chug through the incoming instructions. High-performance RISC-V processors do exactly the same thing, despite already being "RISC".

    wk_end 11 hours

    That's a common factoid that's bandied about, but it's not really accurate, or is at least overstated.

    To start, modern x86 chips are more hard-wired than you might think; certain very complex operations are microcoded, but the bulk of common instructions aren't (they decode to single micro-ops), including ones that are quite CISC-y.

    Micro-ops also aren't really "RISC" instructions that look anything like most typical RISC ISAs. The exact structure of the microcode is secret, but for an example, the Pentium Pro uses 118-bit micro-ops when most contemporary RISCs were fixed at 32. Most microcoded CPUs, anyway, have microcodes that are in some sense simpler than the user-facing ISA but also far lower-level and more tied to the microarchitecture.

    But I think most importantly, this idea itself - that a microcoded CISC chip isn't truly CISC, but just RISC in disguise - is kind of confused, or even backwards. We've had microcoded CPUs since the 50s; the idea predates RISC. All the classic CISC examples (8086, 68000, VAX-11) are microcoded. The key idea behind RISC, arguably, was just to get rid of the friendly user-facing ISA layer and just expose the microarchitecture, since you didn't need to be friendly if the compiler could deal with ugliness - this then turned out to be a bad idea (e.g. branch delay slots) that was backtracked on, and you could argue instead that RISC chips have thus actually become more CISC-y! A chip with a CISC ISA and a simpler microcode underneath isn't secretly a RISC chip...it's just a CISC chip. The definition of a CISC chip is to have a CISC layer on top, regardless of the implementation underneath; the definition of a RISC chip is to not have a CISC layer on top.

    topspin 7 hours

    That's an excellent rebuttal to this common factoid.

    Recently I encountered a view that has me thinking. They characterized the PIO "ISA" in the RPi MCU as CISC. I wonder what you think of that.

    The instructions are indeed complex, having side effects, implied branches and other features that appear to defy the intent of RISC. And yet they're all single cycle, uniform in size and few in number, likely avoiding any microcode, and certainly any pipelining and other complex evaluation.

    If it is CISC, then I believe it is a small triumph of CISC. It's also possible that even characterizing it as an ISA at all is folly, in which case the point is moot.

    mikestorrent 10 hours

    Thanks for the detail, that's very clarifying

    MobiusHorizons 4 hours

    I think you are conflating microcode with micro-ops. The distinction matters to the fundamental workings of the CPU. Microcode is an alternative to a completely hard-coded instruction decoder. It allows tweaking the behavior of the rest of the CPU for a given instruction without re-making the chip. Micro-ops are a way to break complex instructions into multiple independently executing instructions, and in the case of x86 I think comparing them to RISC is completely apt.

    The way I understand it, back in the day when the RISC vs CISC battle started, CPUs were being pipelined for performance, but the complexity of the CISC instructions most CPUs had at the time directly impacted how fast that pipeline could be made. The RISC innovation was changing the ISA by breaking complex instructions with sources and destinations in memory into sequences of simpler loads and stores, and adding a lot more registers to hold the temporary values for computation. RISC allowed shorter pipelines (lower cost of branches or other pipeline flushes) that could also run at higher frequencies because of the relative simplicity.
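
    To make the load/store point concrete, here is a small illustration (mine, not the commenter's; the instruction sequences are typical -O2 compiler output and the exact registers and scheduling vary):

        /* a[i] += x;
         *
         *   x86-64 (memory-destination add, one instruction):
         *       add   dword ptr [rdi + rsi*4], edx
         *
         *   RISC-V (load-store ISA, explicit steps):
         *       slli  t0, a1, 2        # t0 = i * 4
         *       add   t0, a0, t0       # t0 = &a[i]
         *       lw    t1, 0(t0)        # load
         *       addw  t1, t1, a2       # add
         *       sw    t1, 0(t0)        # store
         */
        void bump(int *a, long i, int x) {
            a[i] += x;
        }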

    What Intel did went much further than just microcode. They broke up the loads and stores into micro-ops, using hidden registers to store the intermediates. This allowed them to profit from the innovations that RISC represented without changing the user-facing ISA. This internal load/store architecture is what people typically mean by the RISC hiding inside x86 (although I will admit most of them don't understand the nuance). Of course Intel also added out-of-order execution to the mix, so the CPU is no longer a fixed-length pipeline but more like a series of queues waiting for their inputs to be ready.

    These days high-performance RISC architectures contain all the same architectural elements as x86 CPUs (including micro-ops and extra registers), and the primary difference is the instruction decoding. I believe AMD even designed (but never released) an ARM CPU [1] that put a RISC instruction decoder in front of what I believe was the Zen 1 backend.

    [1]: https://en.wikipedia.org/wiki/AMD_K12

    invalidator 11 hours

    The interest is BECAUSE it's well explored territory. The concept is proven and works fine.

    On the low end where RISC-V currently lives, simplicity is a virtue.

    On the high end, RISC isn't inherently bad; it just couldn't keep up with the massive R&D investment on the x86 side. It can go fast if you sink some money into it, like Apple, Qualcomm, etc. have done with ARM.

    LeFantome 6 hours

    ARM is RISC and dominates x86 in most markets.

    In 2026, RISC-V is not what I would call “low end”. Look up the P870-D, Ascalon, or the C950.

    Do you think Apple spends more money than Intel on chip design?

    pjmlp 2 hours

    ARM is mostly RISC, and doesn't dominate x86 in desktops and servers.

    Apple's business is vertical integration; they have zero presence in the chip market.

    adgjlsfhk1 4 hours

    > Do you think Apple spends more money than Intel on chip design?

    Absolutely. Apple's R&D budget for 2025 was $34 billion to Intel's ~$18 billion (and the majority of Intel's R&D budget goes to architecture, while for Apple that is all TSMC R&D; Apple pays TSMC another ~$20 billion a year, of which something like $8 billion is probably TSMC R&D that goes into Apple's chips).

    Sure, not all of Apple's $34B is CPU R&D, but on a like-for-like basis Apple probably has at least 50% more chip design budget (and they only make ~10-20 different chips a year, compared to Intel who make ~100-200).

    Chyzwar 13 hours

    It is Chinese companies looking for an ARM alternative that push this otherwise mediocre ISA.

    It is possible that ARM-based CPUs will start eating the x86 market slowly. See the Snapdragon X2 and the upcoming Nvidia CPU. Maybe in 10 years new computers will be ARM-based and a lot of IoT will run on RISC-V.

    bobmcnamara 9 hours

    Really? Didn't China pirate the entire ARM China company and start spamming cores like Star1?

    LeFantome 6 hours

    SiFive, Tenstorrent, and other big RISC-V firms are not Chinese.

    adgjlsfhk1 4 hours

    You realize that every WD HDD and every Nvidia GPU from the past couple of years has a RISC-V core in it?

    aappleby 13 hours

    Why "mediocre"? I've written production assembly language for a half-dozen different processor architectures and RISC-V is my favorite by far.

    mikestorrent 12 hours

    You should write an article explaining to the common man why you like it.

    sph 2 hours

    Silly opinion that has no relevance to building competitive CPUs, but I like that RISC-V is modular and you can pick and choose which extensions to adopt.

    Makes writing a simulator so easy (you just have to focus on RV32I to get started), and also makes RISC-V a great bytecode alternative for a homegrown register-based virtual machine: chances are RV32I covers all the operations you will need in any Turing-complete VM. No need to reinvent the wheel. In a weekend I implemented all of RV32IM, passing all the official tests, and now I can target my VM with any major compiler (GCC, Rust) with no effort.
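
    For a flavour of why this is a weekend-sized job, here is a minimal RV32I interpreter sketch of my own (not the poster's code), handling just ADDI and ADD; every other RV32I instruction decodes from the same fixed 32-bit fields:

        #include <stdint.h>
        #include <stdio.h>

        static uint32_t x[32];   /* register file; x[0] must stay zero */

        static void step(uint32_t insn) {
            uint32_t opcode = insn & 0x7f;
            uint32_t rd     = (insn >> 7)  & 0x1f;
            uint32_t funct3 = (insn >> 12) & 0x07;
            uint32_t rs1    = (insn >> 15) & 0x1f;
            uint32_t rs2    = (insn >> 20) & 0x1f;
            int32_t  imm_i  = (int32_t)insn >> 20;   /* sign-extended I-type immediate */

            if (opcode == 0x13 && funct3 == 0)                           /* ADDI */
                x[rd] = x[rs1] + (uint32_t)imm_i;
            else if (opcode == 0x33 && funct3 == 0 && (insn >> 25) == 0) /* ADD */
                x[rd] = x[rs1] + x[rs2];

            x[0] = 0;   /* keep x0 hard-wired to zero */
        }

        int main(void) {
            step(0x00500093);   /* addi x1, x0, 5  */
            step(0x00700113);   /* addi x2, x0, 7  */
            step(0x002081b3);   /* add  x3, x1, x2 */
            printf("x3 = %u\n", x[3]);   /* prints 12 */
            return 0;
        }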

    If there is any architecture that scales linearly from the most minimal of low-energy cores to advanced desktop hardware, it is RISC-V.

    Disclaimer: I don't know much about ARM, but 1) it isn't as open and 2) it's been around enough to have accumulated as much historical cruft as x86.

    topspin 13 hours

    "It is Chinese companies looking for ARM alternative"

    The V in RISC-V represents the fifth iteration of the ISA over the last 46 years, most of which occurred in the US, mainly at Berkeley.

    charcircuit 8 hours

    Your comment is appealing to fallacies, such as that it's old so it's good, or that it was made by a prestigious university. It's not like those early iterations were commercially produced and they learned from real-world usage. For the people who criticize the ISA, saying that it is old will not change their mind.

    topspin 7 hours

    Maybe. People are free to partake in whatever cognitive misadventures they wish. I merely cite the incontrovertible fact that Berkeley RISC predates essentially all of the modern economic history of China, and also the rise of ARM. It came from academe in the US, for better or worse, whether it's crap or the finest ISA ever, and for whatever purpose these US academics had or have. That is all anyone can truthfully say about its pedigree. The rest is just bullshit from the internet.

    avadodin 8 hours

    They push it to save a couple nickels per core on the ARM licenses, not out of nationalistic fervor.

    And it is the Chinese doing it because virtually 100% of all chips are made in China and Taiwan.

    MobiusHorizons 6 hours

    That's not really how it works. There are only a few companies on the planet that are licensed to create their own cores that can run ARM instructions. This is an artificial constraint, though, and at present China is (as far as I know) cut off from those licenses. Everyone else that makes ARM chips takes the core design directly from ARM and integrates it with other pieces (called IP) like IO controllers, power management, the GPU, and accelerators like NPUs to make a system on a chip. But with RISC-V, lots of Chinese companies have been making their own core designs, which leads to flexibility in design that is not generally available (and certainly not cost-effective) on ARM.

  • ddtaylor 12 hours

    I stopped listening to what Canonical says. They often get involved in things and disturb the ecosystem, then abandon stuff or dig a "not invented here" hole.

    Unity, Bazaar, Mir, Upstart, Snap, etc.

    All of them had existing, well-established projects that Canonical attempted to uproot for no purpose other than wanting more control, but they can't actually operate or maintain that control.

    loloquwowndueo 10 hours

    The project bzr was trying to uproot may not be the one you're thinking of. The first release of bzr predates git by about a month.

    ddtaylor 6 hours

    Correct, and I used bzr quite a bit during that time. It was interesting in some ways, but Canonical pushed it for many years after git was obviously the better choice.

    Even to this day there is a complex and archaic process of using Launchpad where git is tacked on because they stuck with Bazaar for so long.

    unmole 4 hours

    I miss Ubuntu One, their Dropbox alternative which came with a wee integrated Linux client. IIRC, their free tier was also more generous in comparison.

    justinclift 11 hours

    Also LXD → Incus: https://linuxcontainers.org/lxd/

    duskwuff 11 hours

    Or ansible/chef/etc -> Juju. There's a lot of NIH to pick from at Canonical.

    goodpoint 15 minutes

    Or Debian -> Ubuntu

    sharts 10 hours

    It’s canonically fucked

    Redoubts 10 hours

    In a way it's really sad how many swings and misses Canonical has taken in its history.

    ddtaylor 8 minutes

    I'm fine with a company getting things wrong from time to time. What I don't like is the attitude where they walk into the room and start moving the furniture around while smugly dismissing or ignoring talented and established people. Then after a bit of milling around they just give up and leave the room and everyone has to clean up the mess.

    maxloh 10 hours

    Snap is definitely not abandoned.

    ur-whale 2 hours

    > Snap is definitely not abandoned.

    You seem to say it like it's a good thing?

    Can't wait for that thing to explode and die.

    esperent 10 hours

    Sadly

    ddtaylor 19 minutes

    Snap is terrible. It's the reason I stopped using Debian-based distros for desktop usage after decades.

    Lying to users and turning apt install commands into shims for a barely functional replacement was disrespectful. Flatpak was and still is better, but even then, if I say I want a system package, you give me a system package. If you have infrastructural reasons why you cannot continue to provide that package, then remove it; Debian-based systems have many ways to provide such things.

    Canonical did it because they wanted to boost Snap usage, and it failed while sending a clear message that they don't respect their user base.

    lukaslalinsky 1 hours

    I was honestly wishing Ubuntu would keep upstart alive. I preferred it as an init system.

    ddtaylor 5 minutes

    That is half the problem. They often introduce neat ideas, but then fail or refuse to integrate them with the rest of the FOSS ecosystem. Then anyone who subscribed to their experiment is left cleaning up the mess and trying to migrate the features or ideas they like to the remaining projects that should have been extended in the first place.

    popcornricecake 6 hours

    Ubuntu Touch... I was so excited about it that I bought one of the phones with it preloaded. I even used it as my sole daily driver for months, until I learned that I was not receiving all calls made to me. Even after that I kept hoping it would keep developing so that I could pick it up again one day. But then Canonical abandoned it instead. That's when they became as good as dead to me.

    ddtaylor 6 hours

    Sadly, KDE and Gnome each spent a lot of time on the same things. Plasma Mobile has eaten more time that could have gone into making Plasma a better desktop.

    kombine 1 hours

    That's a strange argument. Open source software including Plasma Mobile is developed by volunteers who choose to spend their time on a given project. I am quite happy with the pace of Plasma Desktop and the progress made in the past 3 years on its 6th iteration.

    unethical_ban 12 hours

    Not sure on the timelines, but snap, upstart and Mir were all attempts at evolving the Linux ecosystem that lost to Red Hat-backed systems. Unity was legit abandoned, and bazaar... not sure what they were trying to solve there with git and forges already existing.

    loloquwowndueo 10 hours

    > Not sure what they were trying to solve there with git and forges already existing.

    What?

    Bzr predates git (by a few days, but still). Launchpad predated GitHub by a lot. Canonical just played those cards horribly and lost.

    ddtaylor 3 minutes

    I still maintain some Launchpad packages and recipes. It's an insanely archaic system and borderline non-functional. I wouldn't wish it upon most.

    ddtaylor 10 hours

    Wayland was created in 2008. Mir was created in 2013.

    Bazaar and Git were created around the exact same time.

    Unity was abandoned after a failed attempt to circumvent Gnome 3. I was actually involved with the development of Compiz and they hired Sam to work on Unity, as he was one of the masterminds behind Compiz, but again they just didn't have the vision or execution to make it work.

    pjmlp 2 hours

    Unity was great. After it was abandoned I tried GNOME 3 yet again; I, who in the past have collaborated on Gtkmm, ended up moving to XFCE, and nowadays I am fully on macOS/Windows anyway.

    If I ever go back to GNU/Linux full time, GNOME certainly won't be it.

    ddtaylor 11 minutes

    Things improved a lot with Gnome over the years, but as a fellow Gnome 2 user the initial release of 3 and the following years were a real kick in the teeth.

    Things have improved, but the overall Gnome Foundation attitude hasn't. They are still very stubborn and remove basic features. This seemed to start when they did their infamous "focus groups", where they claimed users can't understand basic things.

    I get the desire to provide a cohesive experience, but I think you can do that while also giving people control.

    KDE is shaping up to be much better and it's likely because Valve is providing commercial support and exposing it to a larger audience.

    Cosmic is the new kid, backed by System76; it's pretty nice too and may rescue Gnome in some ways in due time.

    foresto 10 hours

    > bazaar... Not sure what they were trying to solve there with git and forges already existing.

    You are mistaken here. Bazaar, Mercurial, and Git appeared at about the same time, and I think Bazaar was released first.

    IIRC, Bazaar tried to distinguish itself by handling renames better than other version control systems. In practice, this turned out not to be very important to most people.

    (Tangent: It wasn't clear at the time whether Mercurial or Git was the better pick. Their internal design was very similar. Mercurial offered a more pleasant user interface, superior cross-platform support, and a third advantage that I'm forgetting at the moment. Git had unbeatable author recognition. Eventually, Git's improved Windows support and the arrival of GitHub sealed its victory in the popularity contest. But all of that came to pass well after Bazaar was released.)

    omcnoe 4 hours

    The lightweight branch model of git mapped so much better to the way the actual development processes of medium-to-large projects really work(ed).

    Named branches vs bookmarks in hg just means bike shedding about branching strategy. Bookmarks ultimately work more like lightweight git style branches, but they came later, and originally couldn't even be shared (literally just local bookmarks). Named branches on the other hand permanently accumulate as part of the repository history.

    Git came out with 1 cohesive branch design from day 1.

    nottorp 2 hours

    I work on a mercurial hosted project right now. What ticks me off is all those unnamed heads you need to handle every time you pull other people's changes. Yes they're more flexible. Most of the time that just means extra operations for no good reason.

    jurip 1 hours

    Yeah, agreed. I liked the idea of Mercurial branches better than git's — in principle I prefer more rather than less metadata in history — but they genuinely had a scaling problem. I can't recall the numbers, this being more than a decade ago, but I tested with a realistic number of branches for a team of developers using short-lived branches for a while and you could easily see Mercurial slowing down.

    Back when I was testing, bookmarks were available, but Bitbucket was pretty much the only forge that supported Mercurial and their tooling didn't support bookmarks, so that made them a non-starter for many users.

    usrnm 2 hours

    That is very different from my experience with git. I know that the kernel uses branches a lot, but that's probably because of git's history with the project. At every company I've worked at, git is used exactly the same way as CVS or SVN was used many years ago: you make some local changes, you push these local changes to the central store, you forget about it. Branches make local switching between tasks easier, but apart from that nobody cares about branches and they're definitely not treated as an important part of the repo. In fact, they're usually deleted immediately after the change is merged.

    omcnoe 2 hours

    I think you have it swapped around. This is exactly the kind of workflow that git provided better support for: lightweight branches, not an integral part of master history, deleted after merge.