• Hacker News
  • drob518 7 hours

    Well, that’s a blast from the past.

  • mplanchard 11 hours

    I hand-rolled an atom feed for my statically generated blog. It’s a reasonable, easy format to work with.
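    For anyone tempted to do the same: a hand-rolled feed doesn't need much. A minimal sketch using only Python's standard library (all titles, URLs, and dates below are made up):

```python
# Minimal Atom feed generation, standard library only.
# Every URL/title here is a placeholder for illustration.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)  # serialize Atom as the default namespace

def el(parent, tag, text=None, **attrs):
    # Qualify every tag with the Atom namespace URI.
    e = ET.SubElement(parent, f"{{{ATOM}}}{tag}", attrs)
    if text is not None:
        e.text = text
    return e

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

feed = ET.Element(f"{{{ATOM}}}feed")
el(feed, "title", "Example Blog")
el(feed, "id", "https://example.com/")   # any stable IRI works as the id
el(feed, "updated", now)
el(feed, "link", href="https://example.com/atom.xml", rel="self")

entry = el(feed, "entry")
el(entry, "title", "Hello, Atom")
el(entry, "id", "https://example.com/posts/hello-atom")
el(entry, "updated", now)
el(entry, "link", href="https://example.com/posts/hello-atom")
el(entry, "content", "Post body goes here.", type="text")

xml = "<?xml version='1.0' encoding='utf-8'?>\n" + ET.tostring(feed, encoding="unicode")
print(xml)
```

    The spec only requires `title`, `id`, and `updated` on the feed and each entry, so a static-site generator can emit this at build time with no dependencies.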

  • echelon 6 hours

    IIRC, Aaron Swartz was one of the contributors to the format. RIP.

  • tkcranny 11 hours

    I’m not clear on the difference between atom and RSS. Atom seemed to be the better spec, but for my Astro blog I ended up sticking to the built in `rss` helper it ships with.

    JimDabell 24 minutes

    In the beginning was RSS 0.x. It was originally intended to be based on RDF. Compromises were made and it ended up dropping the RDF. The spec. wasn’t very good and had several ambiguities.

    Some people forged ahead with a cleaned up RDF-based version and called it RSS 1.0, while other people went ahead with the ambiguities but without RDF and called it RSS 2.0. The person publishing RSS 2.0 considered it finished and refused to update it. There was drama.

    A bunch of people decided that there was too much to clean up from within that mess and started a new format, Atom. This ended up being a much better spec. with an official RFC, but at this point everybody was calling any type of feed “RSS”, even if it was Atom.

    If you have the choice, you should pick Atom.

  • eterevsky 4 hours

    It was an alternative to RSS from 20 years ago that didn't catch on.

    ravenstine 2 hours

    I thought it did in fact catch on but most people still referred to it as "RSS".

    johnny_reilly 2 hours

    Docusaurus supports it out of the box as well https://docusaurus.io/blog/atom.xml

    riffraff 3 hours

    I think it caught on well enough; platforms such as WordPress still support it out of the box (I just checked my blog, it works).

    I liked Atom's clean design, but I felt it was mostly pushed by Google (I may be misremembering), and in the end the syndicated web faded into obscurity anyway.

    brabel 3 hours

    IIRC RSS 2.0 included most of what Atom has, no?

    talideon 17 minutes

    Not really, and it's still more error-prone than Atom.

    There's really no good reason to use anything other than Atom.

  • rippeltippel 6 hours

    At this point, developers have named so many projects "Atom" that there are officially more Atoms in the world than there are atoms in the universe.

    Gys 1 hour

    https://datatracker.ietf.org/doc/html/rfc4287

    Dec 2005

    I think at that time it was still ok?

    echelon 6 hours

    This one is (was) pretty important.

    The hyperscalers stopped that timeline from winning, though.

    riffraff 3 hours

    How is this the hyperscalers fault?

    YouTube had atom feeds and I don't think Amazon and Microsoft have relevant syndication.

    Meta is surely responsible but that's it, imo.

    erk__ 3 hours

    YouTube still does

        <feed xmlns:yt="http://www.youtube.com/xml/schemas/2015" xmlns:media="http://search.yahoo.com/mrss/" xmlns="http://www.w3.org/2005/Atom">
    
    I don't think they are linked to anywhere, but the URL is http://www.youtube.com/feeds/videos.xml?channel_id=<channel_id>
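    Pulling entry titles out of such a feed needs only namespace-aware queries. A sketch (the feed text here is a made-up sample in the same shape, not real YouTube output):

```python
import xml.etree.ElementTree as ET

# Made-up sample in the same shape as YouTube's channel feed; a real
# one would be fetched from the videos.xml URL above.
SAMPLE = """\
<feed xmlns:yt="http://www.youtube.com/xml/schemas/2015"
      xmlns:media="http://search.yahoo.com/mrss/"
      xmlns="http://www.w3.org/2005/Atom">
  <title>Example Channel</title>
  <entry>
    <title>First video</title>
    <link rel="alternate" href="https://www.youtube.com/watch?v=abc123"/>
  </entry>
</feed>
"""

# Atom elements sit in a default namespace, so queries must qualify them.
NS = {"atom": "http://www.w3.org/2005/Atom"}

root = ET.fromstring(SAMPLE)
titles = [e.findtext("atom:title", namespaces=NS)
          for e in root.findall("atom:entry", NS)]
links = [e.find("atom:link", NS).get("href")
         for e in root.findall("atom:entry", NS)]
print(titles)   # → ['First video']
```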

    echelon 3 hours

    Google on several occasions took moves to make the web less semantic.

    They dumped microformats and standards in favor of soupy error tolerant formats that benefitted their search engine and made it harder for other efforts to make information shareable and accessible.

    They wanted it to be easy to get information in, but for you to have to go through them to get information out.

  • intrasight 11 hours

    The first iteration of Google's APIs was Atom. I do miss XML.

    abustamam 8 hours

    One of the API providers I use at work returns responses in XML; we use an XML parser to convert it to JSON, and even then it's not perfect.

    What do you like about XML? I feel like I'm missing something.

    theshrike79 4 hours

    XML had DTDs and Schemas 20 years ago.

    JSON is still figuring it out.

    deaddodo 5 hours

    The main benefit of XML over JSON is that it is structured and can be associated with schemas for built-in validation.

    Obviously, that's only a benefit if you care about and utilize those features; most teams doing JSON integrations will just build those into the consumer in lieu of them being provided by the transport. But it is something that some people (especially larger enterprise organizations) value.
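    For illustration, this is the kind of machine-checkable contract meant here: a toy XML Schema fragment (all element names made up) that a validating parser can enforce before any application code runs.

```xml
<!-- Toy schema: a <post> must carry an id attribute and contain a
     title and a timestamp, in that order. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="post">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="published" type="xs:dateTime"/>
      </xs:sequence>
      <xs:attribute name="id" type="xs:string" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>
```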

    dolmen 4 hours

    JSON is structured (not plain text to be analyzed by an AI). JSON has JSON Schema.

    In addition, JSON is easier to parse and to map to common data structures of programming languages.

    refulgentis 7 hours

    I don't reach for it often, but I've been around the block a bit; the credit-card processors in the iPad point of sale I built circa 2010 used it, and it seemed a bit off/unnecessary at the time.

    In retrospect, it's useful for creating islands of sanity/enforcement in a codebase: a lightweight way to give type annotations across organizational boundaries.

    > we use an XML parser to parse it to JSON and even then it's not perfect

    I can't quite picture this: how does one parse XML to JSON? I assume there's code that's parsing XML and returning a JSON object? What would make this not perfect, other than a poor implementation of the translator? Would it help if they used JSON? If JSON is a less expressive format than XML, is it even possible to 100% translate their XML to JSON?
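    For what it's worth, a naive converter (a sketch, not any real library's behavior) shows where a generic XML→JSON mapping leaks information:

```python
# Naive XML→JSON converter (illustrative sketch, not a real library).
# Conventions chosen here: attributes get an '@' prefix, repeated
# sibling tags collapse into a list, leaf text becomes a plain string.
import xml.etree.ElementTree as ET

def to_json(node):
    out = {"@" + k: v for k, v in node.attrib.items()}
    for child in node:
        value = to_json(child)
        if child.tag in out:
            if not isinstance(out[child.tag], list):
                out[child.tag] = [out[child.tag]]  # repeated tag → list
            out[child.tag].append(value)
        else:
            out[child.tag] = value
    text = (node.text or "").strip()
    if not out and len(node) == 0:
        return text           # leaf element → plain string
    if text:
        out["#text"] = text   # mixed content needs a synthetic key
    return out

doc = ET.fromstring('<item id="1"><tag>a</tag><tag>b</tag></item>')
print(to_json(doc))           # → {'@id': '1', 'tag': ['a', 'b']}
```

    On the way back, a single `<tag>` versus a one-element list is ambiguous, and ordering between different child tags is lost in a JSON object, which is why round-tripping "isn't perfect" even with a correct implementation.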

    abustamam 6 hours

    > useful for creating islands of sanity/enforcement in a codebase

    Thanks for the insight! Is this what JSDoc/Swagger is now used for?

    > I can't quite picture this: how does one parse XML to JSON?

    I'm not sure, actually. I haven't personally seen the code; I just hear my coworkers constantly lambasting that API provider for their use of XML. Maybe it's just their lack of documentation that sucks, but it's become a running joke that whenever we get a new partner, the team integrating it discovers the API is XML.

  • perrohunter 11 hours

    what is old is new again?

    hnlmorg 11 hours

    No, this is just old.

    Pity though. RSS / Atom was a fantastic concept and it’s a real pity big tech killed them off.

    bawolff 8 hours

    Meh. Big tech didn't kill it off; it was already dead at that point. Sometimes things just aren't popular, no matter how much we might want them to be.

    lolive 7 hours

    Google Reader was uber-popular at one time; then Google decided that syndication of articles, with comments, had to be an exclusive feature of their Facebook-esque Google+.

    pletnes 4 hours

    Lots of sites publish outages, incidents, downtime over RSS/atom. Works great for monitoring, post them into slack with a bot and you can start a discussion thread about that incident where you first hear about it.
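    A minimal version of that pipeline, as a sketch only: the feed and webhook URLs are placeholders, and Slack's incoming webhooks accept a JSON body with a `text` field.

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

NS = {"atom": "http://www.w3.org/2005/Atom"}

def entries(feed_xml):
    """Yield id/title pairs from an Atom feed document."""
    root = ET.fromstring(feed_xml)
    for e in root.findall("atom:entry", NS):
        yield {"id": e.findtext("atom:id", namespaces=NS),
               "title": e.findtext("atom:title", namespaces=NS)}

def slack_payload(entry):
    # Slack incoming webhooks take a JSON body with a "text" field.
    return json.dumps({"text": f"Incident update: {entry['title']}"})

def post_new(feed_url, webhook_url, seen):
    """Poll the feed and post entries whose ids are not yet in `seen`
    (a set, persisted between polls)."""
    feed_xml = urllib.request.urlopen(feed_url).read()
    for e in entries(feed_xml):
        if e["id"] not in seen:
            seen.add(e["id"])
            req = urllib.request.Request(
                webhook_url,
                data=slack_payload(e).encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)
```

    Run `post_new` on a timer (cron, a loop with `time.sleep`, etc.); the `seen` set is the only state you need to keep.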

    rambambram 10 hours

    Nothing is killed. It still exists, it's an open protocol after all. And I choose to use it, it's pretty fun to calmly follow around 2000 feeds from - mostly - blogs from HN. And cars... I need my car blogs.

    geodel 10 hours

    Agreed. That people, and even big companies, nowadays consider hosting a blog with Atom/RSS feeds outside their core competency is not because big tech killed it.

    holistio 6 hours

    Is there any platform for sharing what feeds we follow? Would love to discover some new blogs.

    manuelmoreale 6 hours

    Closest thing I can think of is this one: https://feedland.org

    Or you create a blog for yourself and you make a blogroll.

    As for discovering new blogs, here are a couple of options, though there are more out there: https://ooh.directory, https://blogroll.org/

    rambambram 2 hours

    Well, my guess is that OPML is underrated. And I understand that, because it's so different from the social media that we are used to. On my homepage (link in bio) you can find all the feeds that I follow, available as an OPML file. It might be of interest to you, it might not (probably a lot of blogs you know from here, at least half of my 2000 feeds).

    One 'dream' of mine is to have OPML be the discovery glue between all kinds of individual personal websites and blogs. But this requires critical mass, so there's enough to discover and explore, and it needs some fun/interesting software to do that with.
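    To show how little glue that discovery layer needs: an OPML subscription list is just nested `<outline>` elements, so a reader can be sketched in a few lines (the sample document below is made up):

```python
# Reading an OPML subscription list with the standard library only.
# The sample is made up; real exports from feed readers have the same
# <outline type="rss" xmlUrl="..."> shape, often nested into categories.
import xml.etree.ElementTree as ET

SAMPLE = """\
<opml version="2.0">
  <head><title>My feeds</title></head>
  <body>
    <outline text="Blogs">
      <outline type="rss" text="Example Blog"
               xmlUrl="https://example.com/atom.xml"/>
    </outline>
  </body>
</opml>
"""

def feed_urls(opml_text):
    root = ET.fromstring(opml_text)
    # Outlines may be nested into categories, so search recursively and
    # keep only the ones that actually point at a feed.
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

print(feed_urls(SAMPLE))   # → ['https://example.com/atom.xml']
```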

    darreninthenet 2 hours

    How do you curate and keep on top of so many feeds? I have ~10 on my RSS reader and I sometimes have trouble keeping up if I have a couple of busy days

    rambambram 2 hours

    Good question! I don't follow all the news and updates from each and every feed. At the bottom of this page you can see the UI: https://www.heyhomepage.com/?link=32&title=Screenshots

    Basically, I get to see the latest post from a random feed. Nothing else. No lists of unread new posts from all the feeds. If I like the title and short summary, I click through to the website or blog itself, where I can read the whole thing. There's no FOMO this way, and no information overload. Just one post at a time.

    Because the whole list of feeds is curated by myself, I know that everything is at least a little interesting. I even made a category with YouTube channels that I like, so I can skip their annoying recommended-videos algo.

    Besides this basic functionality, I made what I call 'Newspapers': certain topics with a bunch of selected feeds attached, which get checked automatically in the background. When a Newspaper has enough articles, I see it appear. Otherwise it might take months before a feed shows up in the random selection.