They would start to “seriously consider the possibility that perhaps something was not right”
I use this setup for my personal passwords, with Nextcloud as the sync solution. A semi-fix for that was using Keepass2Android (on Android, obviously). It integrates with Nextcloud directly, keeps a local DB of passwords, and only loads the remote one (and merges) on unlock and on updates, rather than keeping it “constantly” synced on every remote change. It works well… most of the time… with only two devices that almost always have a connection to the server… and for only one user.
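For the curious, the merge-on-unlock idea boils down to something like this rough sketch (not Keepass2Android’s actual code; the entry model and the last-writer-wins rule are simplifying assumptions on my part):

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """Hypothetical, simplified password entry; real KeePass databases
    are encrypted and track per-entry modification times."""
    uuid: str
    password: str
    mtime: float  # last-modified timestamp

def merge(local: dict[str, Entry], remote: dict[str, Entry]) -> dict[str, Entry]:
    """Last-writer-wins merge, run once on unlock rather than on every
    remote change: keep whichever copy of each entry was modified last."""
    merged = dict(local)
    for uuid, entry in remote.items():
        if uuid not in merged or entry.mtime > merged[uuid].mtime:
            merged[uuid] = entry
    return merged
```

The real merge logic in KeePass-compatible apps is more involved (per-field history, deletion handling), but the point stands: reconciliation happens once, at unlock, not continuously.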
It’s overly clunky though. That’s the big advantage of “service based” password managers over “single file based” ones: they handle sync. We have plans to move to Bitwarden at my workplace, and since the client supports multiple accounts on multiple servers, I’ll probably move to that for personal stuff too. The convenience is just there, without the downsides.
Except for the part where it’s not a question of trust (it’s open source), there’s no third-party infrastructure to trust (it can and should be self-hosted), and the data on the server is encrypted client-side before it even leaves your device, sure.
Oh, and you also get proper sync, with no risk of desync when two devices get a change while offline and no need to go check your in-house sync solution; easy sharing between users (still with no trust needed in the server); all working smoothly, with good UI integration on almost every system.
Yeah, I wonder why people bother using that instead of deploying clunky, single-user solutions.
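To make the client-side encryption point concrete: the key is derived from your master password on the device, so even a self-hosted server only ever stores ciphertext. A minimal sketch of the general idea, using Python’s `cryptography` package; the KDF parameters and function names are illustrative, not Bitwarden’s actual scheme:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_vault(master_password: bytes, vault_json: bytes) -> tuple[bytes, bytes]:
    """Derive a key from the master password and encrypt the vault locally.
    Only the salt and the ciphertext ever leave the device."""
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(master_password))
    return salt, Fernet(key).encrypt(vault_json)

salt, blob = encrypt_vault(b"correct horse battery staple", b'{"entries": []}')
# `blob` is what gets uploaded; the server can't read it without the password.
```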
Not exactly, no. From other comments, it also has an incredibly high false-positive rate, so it’s negative security.
Look, we can either look at facts and check the claims of that company that we’re going to invest a lot of money into, or we can accept their bribe and move on. It’s all about efficiency.
Some footage of Tesla’s Full Self-Driving disagrees.
AI will not find a magic solution. Besides, we already have quite a few directions that would help, but we’re not acting on them. Piling more “solutions” on top of them won’t change that.
This really sounds like the parody of rich people who think they can eat and breathe safely as long as they have money, the rest of the world be damned.
Move, yeah. To Firefox… meh. The writing’s not on the wall yet, but we’re not going to ignore the very heavy signaling Mozilla has been doing for years now.
Spec says 4.
You’re right, they aren’t Google. Not for lack of trying though.
You see posts throwing some shade at Mozilla, and your immediate reaction is “it feels almost coordinated”. Well, that may be. But it would be hard to distinguish a “coordinated attack” from a “that’s just the things they’re doing, and there are reports on it” article, no? Especially when most of it can be fact-checked.
In this particular case, those abandoned projects got picked up by others… sometimes. And sometimes not. But they were abandoned. There’s no denying that.
If you want some more hot water for Mozilla, since you’re talking about privacy and security, you’d be interested in their recent shift on those points. Sure, the PR is all about protecting privacy and users, but looking at the actual actions, the message is a bit more diluted. And there’s always a fair number of people ready to do the opposite of what you claim; namely, discarding all criticism because “Mozilla”, when the same criticism would be totally fair game aimed at any other big company.
Being keen on maintaining user privacy, system security, and trust is not the same as picking a “champion” and sticking with it until the end. Mozilla has been doing shady things for half a decade now, and they should not get a free pass just because they’re still the lesser evil for now.
We’ve always been good at walking away, closing our ears, turning a blind eye…
No. We’re all waiting for this guy to activate it so we can get to work.
Even better, they took actual extensions and made them built-in and impossible to remove. The work had already been done to keep the browser lightweight, with the extra features available as optional extensions, and they reverted it.
It’s been going on for years now. We just don’t want to move away because, frankly, there are few viable alternatives.
“Curated wallpapers” that include randomly generated stuff, and “shares profits” on a 50/50 basis, for a shitty app developed by what looks like three Fiverr freelancers in a trench coat.
The point is, they don’t get “competent”. They get better at assembling the pieces they were given. And a proper stack with competent developers will already have moved that redundancy out of the codebase. For whatever remains, thinking is the longest part, and LLMs can’t improve that once the problem gets even a tiny bit complex. Of course, I could end up with a good rough idea of what the code should look like, describe that to an LLM, and have it write the actual code with proper variable names and all, but once I reach the point where I can accurately describe the thing I want, it’s usually just as fast to type it myself. With the added value that it’s easier to double-check.
What remains is providing good insight on new things and understanding complex requirements. While there is room for improvement, it seems more and more obvious that LLMs are not the answer: theoretically, they are not the right tool, and given the level of improvement we’ve actually seen, they definitely have not proved us wrong. The technology is good at some things, but not at getting “competent”.
Also, you sweep aside the privacy and licensing issues, which are big no-nos too.
LLMs have their uses; I outlined some. And in those uses, there is clear room for improvement. For reference, the solution I currently use puts me at accepting around 10% of the automatic suggestions, and out of those, I’d say a third need reworking (so roughly 6–7% of suggestions are usable as-is). Obviously, if that moved up to, say, 90% of suggestions being decent, with less need to fix them afterward, it’d be great. Unfortunately, since you can’t trust these things, you’d still have to review the output carefully, making the whole operation probably not that big of a time saver anyway.
Coding doesn’t allow much leeway. Other activities that allow more leeway for mistakes can probably benefit a lot more. Translation, for example, can be acceptable, in particular because some mishaps get corrected automatically by readers/listeners. But with code, any single mistake will lead to issues down the line.
It is perfectly possible to run anti-cheat that is roughly as good (or as bad, as it often turns out) without full admin privileges and kernel-level drivers. Coupled with server-side validation, which seems to be a dying breed, you’d also weed out a ton of cheaters while missing only the most motivated of them.
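As a toy illustration of what server-side validation means (the names and numbers here are hypothetical, not from any particular game): instead of trusting the client, the server rejects state updates that are physically impossible.

```python
import math

MAX_SPEED = 7.0  # hypothetical max movement speed, in units per second

def validate_move(old_pos: tuple, new_pos: tuple, dt: float) -> bool:
    """Server-side sanity check: reject any position update implying a
    speed above what the game allows. The client is never trusted."""
    return math.dist(old_pos, new_pos) <= MAX_SPEED * dt * 1.05  # jitter margin

# A speed hack teleporting 50 units in 0.1 s gets flagged:
assert not validate_move((0.0, 0.0), (50.0, 0.0), 0.1)
assert validate_move((0.0, 0.0), (0.5, 0.0), 0.1)
```

None of that requires any code on the player’s machine beyond the game itself, which is the whole point.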
As someone who lurks around different communities (to some extent: Steam forums, Reddit, Lemmy, Mastodon, and a few game-centered Discord servers), the issue is not so much with anti-cheat for online play. It’s the nature of these pieces of software that is the issue. It would be another matter if the anti-cheat were also forced on solo gameplay, but that is not the case here.
(bonus points for systems that allow playing on non-protected servers, but that’s asking a bit too much from some publishers I suppose)
“Aside from it being code you don’t want on your machine”
Code you don’t want on your machine, that sometimes has more permissions than you yourself have on your own files, is completely opaque, and has the legitimacy to keep a constant stream of outgoing network data that you can’t audit.
Yes, aside from that, no reason at all. No problem with a huge risk to your privacy for moderate results that don’t particularly benefit you in the long run.
(and all that is assuming that they’re not nefarious to begin with, which is almost impossible to prove)
systemd, as a service manager, is decent. Not necessarily a huge improvement for most use cases.
systemd, the feature creep that pulls every single possible use case into itself to manage everything in one place, with quirks, because making a “generic, do everything” piece of software is not a good idea, is not that great.
systemd, the group of tools that decided to manage everything by rewriting everything from scratch, suffering from the same issues that were fixed decades ago, just because “we can do better”, while changing all the well-known interfaces and causing a schism that leaves other software developers with either a doubled workload or dropping support for half the landscape, is really stupid.
If half the energy that got spent on the “systemd” ecosystem had been spent on existing projects and solutions that already addressed the same issues, we’d likely be in a far better place. Alas, it’s a new ecosystem, so we spend a lot of energy getting back to the point we were at before. And it’s likely that when we get close to it, something new will show up and start the cycle again.
…you know people made fake pictures before image generation, right?