There are two conclusions one can draw from this: either the idle power consumption of laptops is so low that something as trivial as updating the clock display on an otherwise idle system[1] is a significant fraction of it, or their code is so shitty that it's taking an order of magnitude or more power than it should. Given this is Microsoft, I'm inclined to believe the latter, or that it was "deliberately" implemented in an inefficient way to "prove" their argument. It'd be trivial to write a tiny Win32 app that just has an incrementing seconds counter and use that to distinguish the latter two cases (sketch below).
[1] The caveat is that the majority of the time the system will not be idle but doing something else possibly even more energy-intensive.
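For anyone who wants to run that experiment: a minimal sketch of such a counter app (untested; plain C and Win32, the window class name and layout are arbitrary choices of mine). It repaints an incrementing counter once per second via WM_TIMER:

    #include <windows.h>
    #include <stdio.h>

    static unsigned g_seconds = 0;

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_CREATE:
            SetTimer(hwnd, 1, 1000, NULL);      /* fire roughly once per second */
            return 0;
        case WM_TIMER:
            g_seconds++;
            InvalidateRect(hwnd, NULL, TRUE);   /* trigger a repaint */
            return 0;
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC dc = BeginPaint(hwnd, &ps);
            char buf[32];
            int len = snprintf(buf, sizeof buf, "%u", g_seconds);
            TextOutA(dc, 10, 10, buf, len);
            EndPaint(hwnd, &ps);
            return 0;
        }
        case WM_DESTROY:
            KillTimer(hwnd, 1);
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE prev, LPSTR cmd, int show)
    {
        WNDCLASSA wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = "SecondsCounter";
        RegisterClassA(&wc);
        HWND hwnd = CreateWindowA("SecondsCounter", "Seconds", WS_OVERLAPPEDWINDOW,
                                  CW_USEDEFAULT, CW_USEDEFAULT, 200, 100,
                                  NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, show);
        MSG msg;
        while (GetMessageA(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageA(&msg);
        }
        return 0;
    }

Run that next to the taskbar clock under a power meter and you'd learn whether a once-per-second repaint is inherently expensive or only Microsoft's implementation of it is.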
42 minutes of battery life lost on 321 minutes of battery life is insane.
> Some Reddit users on the same thread also pointed out that while the system is already doing plenty in the background, even small updates like this might prevent deeper power-saving states.
This is undoubtedly the answer, and I suspect that if any actual effort were made by Microsoft, the problem might be eliminated entirely. Maybe.
Most likely, the update is implemented by calling through a standard stack of system calls that are completely benign in a normal application, which is already limiting power savings in various ways anyway. But when run by itself on an otherwise idle system, that call stack triggers a bunch of machinery that ends up using a bit more power.
The big question is: Can this actually be optimized with some dedicated programming time? Or is the display/taskbar/scheduling stack in Windows such a convoluted mess that updating the time every second without waking up a bunch of other stuff is impossible without a complete rewrite?
Tldr, yes.
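One concrete lever, for what it's worth: since Windows 8 there's SetCoalescableTimer, which lets a periodic timer declare how much drift it can tolerate so the kernel can batch its expiry with other pending wakeups instead of forcing its own. Whether the taskbar clock actually goes through an ordinary window timer is an assumption on my part; this just sketches the API:

    #include <windows.h>

    /* Called about once per second; the tolerance below lets the kernel
     * batch this wakeup with other timers due around the same time. */
    static void CALLBACK ClockTick(HWND hwnd, UINT msg, UINT_PTR id, DWORD time)
    {
        InvalidateRect(hwnd, NULL, FALSE);  /* repaint just the clock text */
    }

    static void StartClockTimer(HWND taskbarClock)
    {
        /* 1000 ms period, 100 ms of tolerated drift (values are illustrative). */
        SetCoalescableTimer(taskbarClock, 1, 1000, ClockTick, 100);
    }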
Who cares about this, when a GeForce 5090 is using so much power that it routinely turns its power cables into a furnace?
I was wondering the same when configuring Polybar w/ i3 to show seconds on my Linux system. Even if it's marginal, I think I'll disable it.
13% less battery time is pretty wild just from updating the screen once per second but interesting to understand why.
Incompetence
Oh come on, the seconds are entirely predictable: they run from 00 to 59. So store ten glyphs ("0" to "9"), pre-rendered in the current font and foreground/background colors, plus their location on screen, in a small reserved buffer in whatever piece of code is responsible for the final screen frame handover, and the digit could be updated with minimal overhead and without a context switch.
Yes, it would require a small API addition to the desktop server (Wayland, X11, ...) to register/transfer/update those ten glyphs and their locations whenever the user initializes or changes the font, font size, and so on, but the context switch can be totally eliminated. Something like the sketch below.
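Roughly like this (hypothetical sketch; every name below is made up and nothing here is a real compositor API, it just illustrates the per-second cost collapsing to a few memcpy calls, shown for a single digit):

    #include <stdint.h>
    #include <string.h>

    /* Ten pre-rendered digit tiles, re-rasterized only when the user changes
     * font, size, or colors; the per-second path is a plain row-by-row copy. */
    typedef struct {
        uint32_t *pixels;       /* baked ARGB glyph, font and colors included */
        int w, h;
    } DigitTile;

    typedef struct {
        DigitTile tiles[10];    /* "0".."9" */
        int dst_x, dst_y;       /* fixed screen position of the seconds digit */
    } SecondsOverlay;

    /* Run by the compositor just before frame handover; no client wakeup,
     * no context switch, just a handful of memcpy calls. */
    static void blit_seconds_digit(uint32_t *frame, int frame_stride,
                                   const SecondsOverlay *ov, int digit)
    {
        const DigitTile *t = &ov->tiles[digit];
        for (int y = 0; y < t->h; y++)
            memcpy(frame + (size_t)(ov->dst_y + y) * frame_stride + ov->dst_x,
                   t->pixels + (size_t)y * t->w,
                   (size_t)t->w * sizeof(uint32_t));
    }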
It’s a feature to exercise the whole display stack at every update, to make sure it works, for example when running remote desktop (or any other similar protocol/stack like citrix), where you can observe if the network still can send packets every second. Otherwise you need to wait a minute before noticing connectivity issues. Both rdp and citrix are extremely fragile and a clock that updates every second is a very good visual indicator for monitoring the connection. https://xkcd.com/1172
That's quite surprising. I wouldn't have imagined Windows (or any other "desktop OS") going to great lengths to optimize for static screen content the way that e.g. smartphones or wearables do, which, as I understand it, have dedicated hardware for displaying a fully static screen while powering down large parts of the display pipeline.
Windows runs on laptops and tablets and such. At this point they probably do a fair bit of that sort of thing.
The decision to not show seconds dates back to Windows 95. Back then the motivation was not power saving, but rather allowing the code related to the clock and text rendering to be swapped out to disk on a 386 with 4MB RAM... Raymond Chen: https://devblogs.microsoft.com/oldnewthing/20031010-00/?p=42...
Desktop OSs idle most of the time, and the comparison is with respect to an idle desktop. Forcing context switches and propagating updates through the GUI stack every second isn’t free in that situation; it means that at least one CPU core can’t stay in a lower-power state. In contrast, you probably won’t see much of a difference in battery life from the seconds display when simultaneously watching video or running computational tasks.
Raymond Chen recently wrote about the history of seconds on the taskbar: https://devblogs.microsoft.com/oldnewthing/20250421-00/?p=11...
After reading only the title of the HN story I automatically assumed it was probably Raymond. Always a pleasure reading his posts.
I don't get it. If the system is busy, it will be updating the screen more often than once a second anyway, and if it's not, it will go to sleep in less than a minute. Does Windows not turn off the display when unused, and then go to sleep after a while?
This is not a test of normal usage. This is a scenario of sleep disabled and an idle system.
Does this happen on linux? Polybar with i3 has an option to show seconds by clicking the date and time
It happens on GNOME at the very least, and I would expect every modern platform is the same way.
It certainly does. There is for example a measurable energy cost for having a blinking cursor in a terminal, and there have been huge flame wars about efforts to move to non-blinking cursors.
The compromise for GNOME Terminal is that the cursor will stop blinking after a terminal has been idle for ten seconds.
Why not similarly disable seconds display after some specifiable time has elapsed since the last key or mouse event? The decimal digits could be replaced with a greyed-out ":--".
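A sketch of how that could work on Windows (assuming GetLastInputInfo as the idle source; the 10-second threshold just mirrors the GNOME Terminal behaviour mentioned above):

    #include <stdio.h>
    #include <windows.h>

    /* Format the tray clock: live seconds while the user is active,
     * a greyed-out ":--" once input has been idle for 10 seconds. */
    static void format_tray_clock(char *buf, size_t n)
    {
        LASTINPUTINFO lii = { sizeof(lii) };
        GetLastInputInfo(&lii);
        DWORD idle_ms = GetTickCount() - lii.dwTime;

        SYSTEMTIME st;
        GetLocalTime(&st);

        if (idle_ms < 10000)
            snprintf(buf, n, "%02u:%02u:%02u",
                     (unsigned)st.wHour, (unsigned)st.wMinute, (unsigned)st.wSecond);
        else
            snprintf(buf, n, "%02u:%02u:--",
                     (unsigned)st.wHour, (unsigned)st.wMinute);
    }

The repaint timer could then also drop from once a second to once a minute while idle, which is where the actual power saving would come from.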
It's going to take power, no matter the operating system. What matters is how much power it takes. On most desktop environments and widgets, it's probably negligible.
Wonder if it makes sense to architect computers with a small sidecar CPU that is not as powerful but runs at ultra-low power... so tasks like these can be delegated to it while allowing the main CPU to enter a low-power state when nothing else is placing demand on it.
The extra CPU usage isn't that bad. Updating a single number isn't that hard.
The bigger problem is waking up the GPU and all the communication between components, which is why the computer with integrated graphics takes a smaller hit than the one with dedicated graphics. And why the ARM laptop did even better, because they were optimised for this use case.
I would have been happy with no seconds in the tray, but showing the seconds if you click on the clock - technology that existed a decade ago in Windows 10, but is obviously technologically impossible for hundreds of PhD holding software engineers at the richest company in the world to figure out in 2025.
> "PhD holding software engineers at the richest company in the world to figure out in 2025."
Let’s be honest, implementing this would be up to a bunch of offshore contractors because corporate can’t bring itself to pay software engineers to implement this feature thoughtfully and comprehensively.
All the smart engineers must have left to do AI stuff. The new Windows 11 "power savings" settings menu has this gamification angle. It tells you that you've enabled "5 of 7 power saving settings" or whatever, in a way that implies the goal is to get all 7. It triggers my OCD every time I see that screen, implying the quest is unfinished.
Performative environmentalism, standard operating procedure for every big company. Shift the blame onto the consumer, make them feel guilty because they set their screen timeout to more than 5 minutes. Gamification is a great tool for making people feel pressured and guilty, but with plausible deniability for the company.
Meanwhile, build data centers at incredible scale to run AI and force it onto those same consumers in every way possible, but never tell them how much power that wastes.
Landauer's principle (https://en.wikipedia.org/wiki/Landauer%27s_principle) tells us that you can't delete a bit without releasing some heat. As the new time digits come in and overwrite the old ones (in the framebuffer, in the LCD, likely other places too) this would occur as the previous digits were deleted. So the only case where showing the time would not take more power is one where other things are not held equal, e.g. some quirk of the software ends up doing more work to ignore the time than to show it (I'd call such a thing a bug).
This effect is likely vanishingly small, definitely overshadowed by engineering considerations like the voltage used when walking pixels through changes and such. But still, it's a physics nudge towards "yes".
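To put numbers on "vanishingly small": a back-of-envelope bound, assuming room temperature (300 K) and the absurdly generous case of every bit of a 4K 32-bit framebuffer being erased once per second:

    E_{\text{bit}} = k_B T \ln 2
                   \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
                   \approx 2.9\times10^{-21}\,\mathrm{J}
    N = 3840 \times 2160 \times 32 \approx 2.7\times10^{8}\ \text{bits}
    P_{\min} = N \cdot E_{\text{bit}} / 1\,\mathrm{s} \approx 8\times10^{-13}\,\mathrm{W}

That's roughly twelve orders of magnitude below the watt-scale difference actually measured, so the nudge is real but utterly invisible.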
Landauer’s principle is an information-theoretic result about the fundamental cost of computations. With CMOS, every logic gate has multiple transistors, some of which just get charged and dumped to ground with every state transition anyway.
It is like worrying about Carnot’s limit… for a motorboat.
What if it's an OLED screen and the clock is in a dark font on a light background, so adding seconds means less light is emitted? (Light mode only)
Yeah good point. With large enough pixels and pathological color choices you could almost certainly derive the opposite result.
It would be interesting to test it over a remote desktop session where the screen on the device under test is off. That would eliminate a lot of factors related to the display. Presumably you'd see that the network traffic is either larger to begin with, or doesn't compress quite as well, giving you another reason to say "yes, but what if..."
When the start menu is a react native app that spikes up the cpu needing billions of flops just to do that, I doubt this number will make a difference.
Agreed, the dominant effect would likely be which ads are being served to the start menu, or which user data is being exfiltrated to Microsoft at the time.
Wasn't that debunked already?
Is that true? Was Active Desktop just a preview of what's to come?
> Test Type: Idle desktop only (no applications or media playback, unless otherwise stated)
It's weird they didn't also include a simple web browser test that navigates a set of web links and scrolls the window occasionally. Just something very light at least, doesn't even have to be heavy like video playback.
Yeah this is not meaningful due to the unrealistic workload. Sad thing is, I bet a web browser test would still show the difference, as long as a page is kept static on the screen for more than a few seconds before moving on.
Power consumption is incredibly difficult to benchmark in a meaningful way because it is extremely dependent on all the devices in the system and all the software running, and most power optimizations are workload-dependent. Tons of time went into this in the Windows fundamentals team at Microsoft.
I agree. My guess is the way this may be implemented could keep the system from entering a lower energy state in some way or another, something which would be far less noticeable during normal usage.
Not that weird. Idle desktop isolates the effects of the change to get a worst case scenario. Would be interesting to see a light activity test too though - see if you still get a noticeable difference.
> We’re currently running the same test again on all three laptops to account for variance, but this time with a video playing to simulate a more active usage scenario. Once those results are in, we’ll update the relevant section with the new data.
What? "We are doing a second test to account for variance, but also changing the test setup" that doesn't make any sense.
They account for variance between different laptops. The test is changing the same way for all laptops. It makes sense.
I hope so, because I actively want seconds absent from the system tray. Attention is a scarce resource; the fewer things on the screen constantly changing and thereby consuming my attention, the better. If saving power means we remain free from that anti-feature, great.
Ideally the clock display should be customisable to display whatever level of precision you want; I believe at least one Linux application lets you specify it via a strftime() format string.
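For illustration, that's about all the mechanism it takes; a minimal sketch in C (the format string would come from user configuration rather than being hard-coded as here):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char buf[64];
        time_t now = time(NULL);

        /* "%H:%M" hides seconds, "%H:%M:%S" shows them,
         * "%I:%M %p" gives 12-hour time, and so on. */
        strftime(buf, sizeof buf, "%H:%M:%S", localtime(&now));
        puts(buf);
        return 0;
    }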
I, for one, love it for casual and incidental benchmarking. Of everything - not just a process I run, but also how long between bird chirps outside my window. But I also find it very easy to ignore, too. Glad it’s optional.
I just say "one one thousand two one thousand…" under my breath.
Does nobody care about just being able to tell the time accurately? 59 seconds makes a big difference for joining online meetings and things.
No and No, it doesn't.
Approximately zero people in the world care if you join a meeting at 1:00, or 1:01. It's good to aim to be punctual, but if you're off by a minute there is no consequence.
That is definitely not true. It's very dependent on the culture, the company, the specific group.
I've met managers who literally lock the conference room door when it hits :00.
That's a little crazy in my view, but there are definitely places where it's the norm.
There are basically two ways of managing expectations around meeting times. The first is that it's acceptable for meetings to run late, so it's normal and tolerated for people to be late to their next meeting, and meetings often start something like 5 minutes late, and you try to make sure nothing really important gets discussed until 10 minutes in. The other is that it's unacceptable for meetings to start late, so people always leave the previous meeting early to make sure they have time for bathroom, emergency emails, etc. In which case important participants wind up leaving before a decision gets made, which is a whole problem of its own.
I'm curious how you came to such a universally sweeping conclusion. At any rate, it's incorrect as I have personally observed counterexamples in my professional career.
Beware of concentrated benefit and diffuse cost. Sure, let a seconds clock be available to call up the 0.1% of the time when you want it. But it shouldn't be in the system tray presenting a small but ongoing attention drain the other 99.9% of the time.
As a horologist, I want seconds. It annoys me not to have it. I wouldn't care if it isn't the default, as long as I can set it, similarly to how I currently have to set 24-hour time separately on all my machines because the US locale defaults to 12-hour time. That's fine, and understandable. But I'm constantly annoyed, for instance, by Apple's long running absolute refusal to allow the iOS clock to display seconds.
The attention drain is sadly pretty much impossible to measure properly, as it's a subjective thing.
I'm one of those freaks who have this on and I honestly like it a lot. It gives me a feeling of certainty, grounding, and precision.
Primary driver for turning it on was their redesign of the clock flyout to be, uhh, nonexistent with Windows 11, which I'd previously use on demand for seconds information. I was also worried about this being a nonsolution and a distraction initially, but it ended up being fine.
I hate these kinds of "saves power" things in Windows settings. The OS itself pings home so often, sends network requests for everything you do, shows ads on the login screen, takes screenshots (for Recall), and Edge sends contents from web forms for "AI". And now it is my responsibility to disable showing seconds in the taskbar??? If Microsoft really wants to be green, Windows shouldn't do all these wasteful things!
Wonder how much an OS that focuses on battery life can extend working time on a laptop. Would be a killer marketing point I think.
This setting is disabled by default.
To be fair, does it do all those things every second? https://learn.microsoft.com/en-us/windows-hardware/drivers/k...
(For the record, I abhor Windows 11)
And forcefully overrides personal preferences to NOT show any Windows Spotlight images and trivia on the lock screen, and news and recommended content on the Edge homescreen.
> Edge sends contents from web forms for "AI"
That reminds me of Chrom[e|ium]'s insanely bad form suggest/autofill logic: The browser creates some sort of fuzzy hash/fingerprint of the forms you visit, and uses that with some Google black box to "crowdsource" what kinds of field-data to suggest... even when both the user and the web-designer try to stop it.
For example, imagine you're editing a list of Customers, and Chrome keeps trying to trick you into entering your own "first name" and "last name" whenever you add or edit an entry. For a while developers could stop that with autocomplete="off" and then Chromium deliberately put in code to ignore it.
I'm not sure how much of a privacy leak those form-fingerprints are, but they are presumptively shady when the developers ignore countless detailed complaints over many years in order to keep the behavior.
> And now it is my responsibility to disable showing seconds in the taskbar???
It is not. This "feature" is disabled by default.
Google "manufactured outrage".
Similar vibe to telling people not to flush the toilet twice while companies are pouring literal poison into oceans/seas.
Also airlines asking for extra money to offset emissions, just absolute insanity
While those same airlines fly empty planes just to avoid losing airport slots.
And Windows Update burns through an ungodly amount of CPU.
And it is only getting worse. I would consider windows update on its own enough reason for not using this shit os at all. Be aware! https://youtu.be/4RQ6pek3JoM
> And now it is my responsibility to disable showing seconds in the taskbar??? If Microsoft really wants to be green, Windows shouldn't do all these wasteful things!
and building multiple gigawatt consuming data centres to produce AI slop no-one asked for and no-one wants
powered by fossil fuels
"This is Windows 11, you'll need a new PC for it, throw away your old PC and wreck the planet some more, and by the way we'll stop supporting Windows 10 in October 2025, if your PC gets a malware and your bank account gets hacked and drained it's not our fault".
This is not Windows-specific, it has been shown wrt. Linux systems also. It's why recent Linux desktop environments have gotten rid of the blinking cursor in command prompt windows (that also causes frequent wakeups and screen updates) and why it probably makes sense to disable most animations too.
> It's why recent Linux desktop environments have gotten rid of the blinking cursor in command prompt windows
This used to be done entirely in hardware (VGA text modes), and I believe some early GPUs had a feature to do that in graphics modes too.
There was a fight in Vista time frame about whether or not animated/video desktop backgrounds were a good idea. They were definitely cool, but AT WHAT COST. Ended up shipping as an "extra".
And nowadays we got people running Wallpaper Engine on their idling laptops in college classes ;)
I would check:
- Don't show ads (saves power)
- Don't call home (saves power)
This might be considered if they ever find out how shitty Windows can get before people actually stop buying computers with it.
As long as Red Hat keeps embracing and extending free desktop, and Apple keeps disallowing standard features like native Vulkan (Mac is not for games I get it but come on, please?), people will either keep using Windows or, more likely, switch to Android devices for their home and business needs.
Both things can be true/desirable at the same time.
If, as tested, this setting makes a double-digit percentage difference, I'm glad Microsoft exposes it in the UI. I'd also be glad if they didn't do as much weird stuff on their user's devices as they do.
Mentioning that some setting uses more power can be useful and desirable. I think Jaxan might be irked by "energy recommendations" Windows gives you in power & battery settings, though. It suggests applying "energy saving recommendations" to lower your carbon footprint, and while I absolutely support energy saving, I also find those "recommendations" obnoxious.
The recommendations suggest, among other things, switching to power-saving mode, turning on dark mode, setting screen brightness for energy efficiency, and auto-suspending and turning the screen off after 3 minutes.
Power-saving mode saves little at least on most laptops but has a significant performance impact, dark mode only saves power on LED displays (LCDs have a slight inverse effect), and both dark/light mode and screen brightness should be set based on ergonomics, not based on saving three watts.
When these kinds of recommendations are given to the consumer for "lowering your carbon footprint", with a green leaf symbol for impact, while Microsoft's data centres keep spending enormous amounts of power on data analysis, I find it hard to see that as anything more than greenwashing.
The test setting is important here: the test is on an otherwise idle machine. This means the update ensures that some thread wakes on a timer every second, which may explain the large drop. This test is interesting, but not very representative of a real-world usage scenario. It’ll be interesting to compare it to the results of the other test they’re running, where they keep a video playing in the background.
I'm still a little curious of what's causing the increase in power use. A single additional wakeup per second should not have a two-digit percentage impact on power use when even an idle machine is probably going to have dozens of wakeups per second anyway. I wonder if updating the seconds display somehow causes lots of extra wakeups instead.
> If, as tested, this setting makes a double-digit percentage difference, I'm glad Microsoft exposes it in the UI.
I'd rather them write more performant code. This feels like your car having the option to burn motor oil to show a more precise clock on the dash; you don't get kudos for adding an off-switch for that.
> I'd rather them write more performant code.
My expectations of Microsoft software aren't terribly high. I'd say Windows is performant (ie it works about as well as I expect).
It's really an on switch.
The feature is off by default in Windows 11 and was not offered in any previous non-beta Windows version.
But you could open the clock flyout and see it on demand. Now it's all-or-nothing (unless they changed it, again)
(Have I mentioned how much I loathe Windows 11?)
Better analogy would be reducing your MPGs (fuel efficiency) to show a more precise clock, and arguably we all make that sacrifice to get CarPlay.
Energy isn’t free.
Even if they wrote more performant code, it would just mean less relative loss of energy to show seconds but still loss compared to not showing seconds.
Of course it's not free - TANSTAAFL - but it should certainly not increase energy consumption by 13%!
> I'd rather them write more performant code.
In keeping with the theme of the comment you're replying to, writing better-performing code and providing performance options are not mutually exclusive. Both are good ideas.
> This feels like your car having the option to burn motor oil to show a more precise clock on the dash; you don't get kudos for adding an off-switch for that.
(Sounds more like you're arguing that it should be forced off instead of being an option? Reasonable take in this case, but not the same argument.)
It shouldn't take any noticable power/cycles to accomplish this task. Having flags for "performance" littered through the codebase and UI is a classic failure mode that leads to a janky slow base performance. "Do always and inhibit when not needed".
No, I think they’re arguing that showing seconds in the system tray shouldn’t be so inefficient that turning it off gives back double-digit percentage energy savings.
I think we all agree there needs to be some additional power draw for the seconds feature, but it’s unclear how much power is truly necessary vs this just being a poor implementation.
There's a dramatic increase in how frequently you interrupt the CPU to update the display. That is true at the OS level no matter how efficient you make the seconds-display code.
> This feels like your car having the option to burn motor oil to show a more precise clock on the dash
I actively don't want to see seconds; the constant updating is distracting. It should be an option even if there were no energy impact. (Ditto for terminal cursor blinking).
...Did you not see that it is an option, off by default?
Doesn't the blinking cursor tell you it's ready for input and not still running the previous command? Seems useful.
I have cursor blinking off anywhere I can. The prompt is what tells me I can type something, or in a GUI program, you can always type if there is a cursor no matter if it's solid or blinking. At least, that's my experience, perhaps you're familiar with another system or piece of software where the blinking is what tells you that you can enter something?
There are better shape/color alternatives for that
I had some very technical friends be incredibly surprised by the Edge form thing, I think that is not sufficiently called out!
They send any text you type in a form to their AI cloud and hold on to it for 30 days.
Any form.
On any website.
What the actual fuck?
> Use Windows
> Expect Privacy
> Don't get Privacy
SurprisedPikachu.jpg
I was surprised by this. I don't use Edge much, and I don't remember being asked about it.
Any form meaning passwords too?
Looked into it; the answer seems to be that it can be either yes or no, depending on the website and user actions.
By default, when you implement a form that takes a password, you (the developer) are going to be using the "input" HTML element with the type "password". This element is exempt from spellchecking, so no issues there.
However, many websites also implement a temporary password reveal feature. To achieve this, one would typically change the type of the "input" element to "text" when clicking the reveal button, thereby unintentionally allowing spellchecking.
You (the developer) can explicitly mark an element as ineligible for spellchecking by setting the "spellcheck" attribute to "false", remediating this quirk: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
You (the developer) can of course also just use a different approach for implementing a password reveal feature.
As the MDN docs remark, this infoleak vector is known as "spelljacking".
Whoa how is this not all over the news at all times?
People are tired of hearing about it. They don't feel like they can do anything about it.
Anyone remember when Ubuntu sent every keystroke to Amazon?
We moved on and used alternatives. And stayed on alternatives.
The caring cohort has mortgages and kids.
And Linux for desktop is finally easy enough for those of us with both.
Microsoft ordered me to buy a new computer for Win 11, so I took said kids to Microcenter, asked for a machine whose specs could play a particular steam game on Linux, returned to my mortgage, installed Ubuntu and haven't given Windows a second thought in months.
Something I heard a while back but have never had confirmed is that the Nvidia driver sends the content of every window title to Nvidia.
Does anyone know if that is true?
This was when you had to create an account for GeForce Experience and they started sending crash stats.
Some people checked it with Wireshark at the time and didn’t find anything other than what was stated. [0]
0: https://gamersnexus.net/industry/2672-geforce-experience-dat...
There was a smart tv that did that with the titles of any media played too wasn't there?
You can safely assume the same is happening with streaming sticks/boxes
This is only true if you enable extended spell checks, which makes some sense. By default, no form data is sent to Microsoft AFAIK. Note that the same holds for Google Chrome.
What setting is this? I can only find "Enable machine learning powered autofill suggestions" which seems to have defaulted to on.
Here you go, from the horse's mouth: https://www.microsoft.com/en-us/edge/learning-center/improve...
Note that this is from 2023. Their legal docs, last updated in 2024, claim a bit different: https://learn.microsoft.com/en-us/legal/microsoft-edge/priva...
> By default, Microsoft Edge provides spelling and grammar checking using Microsoft Editor. When using Microsoft Editor, Microsoft Edge sends your typed text and a service token to a Microsoft cloud service over a secure HTTPS connection. The service token doesn't contain any user-identifiable information. A Microsoft cloud service then processes the text to detect spelling and grammar errors in your text. All your typed text that's sent to Microsoft is deleted immediately after processing occurs. No data is stored for any period of time.
In what world does holding the user's private data for 30 days make sense for a spell checker? Even sending the data at all is sad. We've had offline spell checking for decades.
For the same reason Grammarly does it too, I'd assume.
This is often (though not always) a blanket statement.
Logs are always generated, and logs include some amount of data about the user, if only environmental.
It's quite plausible that the spellchecker does not store your actual user data, but information about the request, or error logging includes more UGC than intended.
Note: I don't have any insider knowledge about their spellcheck API, but I've worked on similar systems which have similar language for little more than basic request logging.
PII is stored _at most_ for 30 days.
To track when the user corrects it. Otherwise you can't adapt if somehow the correction is not what the user wanted.
If there are a bunch of these corrections, you know something is wrong there. IMO 30 days is quite modest, and if this is properly anonymized…
Edit: dear HN user who decided to silently downvote - you could do better by actually voicing your opinion
> dear HN user who decided to silently downvote - you could do better by actually voicing your opinion
Sure, I'll bite. Let's address the obvious issue first: what you're saying is speculation. I can only provide my own speculation in return, and then you might or might not find it agreeable, or at least claim either way. And there will be nothing I can do about it. I generally don't find this valuable or productive, and I did disagree with yours, hence my silent downvote.
But since you're explicitly asking for other people's speculation, here I go. Advanced "spellchecking" necessitates the use of AI, as natural languages cannot ever be fully processed using just hard-coded logic. This is not an opinion; you learn this when taking a formal languages class at university. It arises from formal logic only being able to wrangle formal-logic-abiding things, which natural languages aren't (else they'd be called formal languages).
What the opinion is, and the speculation is, is that this is what the feature kicks off when it sends over input data to MS's servers for advanced "spellchecking", much like what I speculate Grammarly does too. Either that, or these services have some proprietary language engine that they'd rather keep on their own premises, because why put your moat out there if you don't strictly have to.
Technologically speaking, at this point it might be possible to do this locally, on-device. I believe this didn't use to be the case (although I don't have sources on that), and so this would be another reason why you'd send people's inputs to the shadow realm.
It’s hard to read writing packed with defensive clauses.
Better to say what you need to say. Leave the defense for the occasion someone misunderstood what you meant to say.
It's further pretty hard to write like this, but I still prefer it over getting trivially checkmated by ill meaning people, and over being misinterpreted silently and that causing issues downstream. It's at this point an instinctual defense mechanism, that I've grown to organically develop in the low-trust environments that are forums like this.
I 100% agree with the principle, but (regrettably) in practice you can't do this in a lot of places where the community is critical (which isn't a bad thing by itself) but doesn't call out/downvote/moderate bad criticism (which is bad).
I can't count the number of times on HN that I've seen responses to posts that took advantage of the poster not writing defensively to emotionally attack them in ways that absolutely break the HN guidelines, and weren't flagged or downvoted. And on other sites, like Reddit, it's just the norm.
The defensive writing will continue until morals improve.
Reminds me of a video I saw on YouTube from the "PC Security Channel", whose host was utterly flabbergasted that the Start Menu would send all keypresses typed into its search bar to MS.
They had searching on the web enabled... Pretty hard to search the web using Bing without sending along a search term.
Stuff like that and the one you replied to are why I stopped caring. The outrage is so often complete and utter nonsense that my default response is disbelief.
It came enabled by default. It is not as if this setting was searched for, then enabled, then had some unintended consequence - taskbar searches used to not search the internet, then they did.
Which would be a perfectly fine thing to take issue with. It just also wouldn't be quite as eye-catching as misleadingly portraying the thing as now being a keylogger.