I neeeeeeed it. This looks a lot like CrossCode but refined. It has all the puzzles and scenery and build trees and I want to play it now
It’s definitely not the latter. It’s a fancy antivirus known as an EDR - Endpoint Detection and Response. Purely security software for defending against cyber attacks
I want to clarify something that you hinted at in your post but I’ve seen in other posts too. This isn’t a cloud failure or even remotely related to one - it’s a component of a company’s security software suite causing crippling issues.
I apologize ahead of time; when I started typing this I didn’t think it would be this long. This is pretty important to me, and I feel like it can help clear up a lot of misinformation about how IT and software work in an enterprise.
CrowdStrike is an EDR, or Endpoint Detection and Response software - basically a fancy antivirus that isn’t file-signature based but behavior-monitoring based. Like all AVs, it receives regular definition updates, roughly once an hour, to anticipate threat actors using zero-day exploits. This is the part that failed: the hourly update channel pushed a bad update. Some computers escaped unscathed because their check-ins happened to miss the window while the bad update was live - their last check-in was right before it was pushed, or their next one came right after it was pulled.
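To make that timing concrete, here’s a minimal Python sketch of the check-in window logic. The push/revert timestamps are approximations from public reporting, so treat them as illustrative:

```python
from datetime import datetime

# Approximate (reported) window when the bad channel file was live:
# pushed around 04:09 UTC, reverted around 05:27 UTC on July 19, 2024.
BAD_LIVE = datetime(2024, 7, 19, 4, 9)
BAD_PULLED = datetime(2024, 7, 19, 5, 27)

def hit_by_bad_update(checkin_times: list[datetime]) -> bool:
    """A host only receives the bad file if one of its check-ins
    lands inside the window while the file was live."""
    return any(BAD_LIVE <= t < BAD_PULLED for t in checkin_times)

# A host that was on all night, checking in roughly hourly, gets hit.
unlucky = [datetime(2024, 7, 19, h, 30) for h in range(8)]
# A host powered off from ~04:00 to ~06:00 misses the window entirely.
lucky = [datetime(2024, 7, 19, 3, 55), datetime(2024, 7, 19, 6, 5)]

print(hit_by_bad_update(unlucky))  # True
print(hit_by_bad_update(lucky))    # False
```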
Another facet of AVs is that they work by monitoring every part of a computer. This requires specific drivers that integrate into the core OS, and those were updated to accompany the definition update. Anything that integrates that closely can cause serious issues if it isn’t made right.
Before this incident, CrowdStrike was regarded as the best in its class of EDR software. This isn’t something companies swap to willy-nilly just because they feel like it. Implementing a new security software across all systems in an org is a huge undertaking, one that I’ve been a part of several times. It sucks to not only rip out the old software but also integrate the new software and make sure it doesn’t mess up other parts of each server. Basically, companies wouldn’t be on CS unless they’re too lazy to change away, or they think it’s really that good.
EDR software plays a huge role in securing a company’s systems. Companies need this tech for security, but also because without it they risk failing critical audits or being unable to qualify for cybersecurity insurance. Any similar software could have had this issue - Cylance, Palo Alto Cortex XDR, and Trend Micro are all very strong players in the field too, and are just as prone to a bad update.
And it’s not just EDR software that could cause issues like this, but lots of other tech. Anything that does regular definition or software updates can’t realistically be vetted by the enterprise - the frequency and urgency of each update make filtering them impractical. Firewalls come to mind, but there could be a lot of systems at risk of failing due to a bad update. Of course, it should fall on the enterprise to provide the manpower to do this, but that’s highly unlikely when most IT teams are already skeleton crews subject to heavy budget cuts.
So with all that, you might ask, “how is this mitigated?” It’s a very good question. The most obvious solution - “don’t use one software on all systems” - is more complicated and expensive than you’d think. Imagine bug-testing your software for two separate web servers: one uses CrowdStrike, Tenable, Apache, Python, and Node.js, and the other uses Trend Micro, Qualys, nginx, PHP, and Rust. The amount of time spent replicating behavior would be astronomical, and the two stacks are unlikely to ever reach feature parity. At what point do you draw the line and call redundant tech stacks too burdensome? That’s the risk a lot of companies accept when choosing a single vendor.
On a more relatable scale, imagine you work at a company where the desktop email client is the most important part of your job. One half of the team uses Microsoft Outlook and the other half uses Mozilla Thunderbird. Neither has feature parity with the other, and one will naturally be superior to the other. But because the org is afraid of everyone getting locked out of email at once, you happen to be stuck on “the bad” one. Not a very good experience for your half of the team, even if the arrangement is more resilient overall.
A better solution is improved BCDR (business continuity and disaster recovery) processes, most notably backup and restore testing. As for my personal role in this incident, only a handful of my servers were affected, for which I am very grateful. I was able to recover 6 out of 7 affected servers, but the last is proving a little trickier. The best fix would be to restore it to a former state and continue on, but in my haste to set up the env, I neglected to configure snapshotting and other backup processes. It won’t be the end of the world to recreate this server, but it could have been much worse if it hosted anything critical. I do plan on using this event to review every system I have a hand in and assess redundancy at each level - cloud, region, network, instance, and software.
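For anyone in the same boat, getting basic snapshots going is usually only a few lines. Here’s a rough sketch using boto3 against EC2 - assuming AWS, and the region, volume ID, and tags are placeholders, so adapt it to whatever your cloud offers:

```python
import boto3

# Rough sketch, assuming AWS/EBS - the region, volume ID, and tag
# values below are placeholders, not real resources.
ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_volume(volume_id: str, reason: str) -> str:
    """Take an EBS snapshot and tag it so restore tests can find it."""
    resp = ec2.create_snapshot(
        VolumeId=volume_id,
        Description=reason,
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "purpose", "Value": "bcdr-nightly"}],
        }],
    )
    return resp["SnapshotId"]

# Schedule this nightly (cron, EventBridge, etc.) - and actually test
# restores, since an unrestorable snapshot is just false comfort.
print(snapshot_volume("vol-0123456789abcdef0", "nightly BCDR snapshot"))
```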
Laptops are trickier to fix because of how distributed they are by nature. But they can still be covered by taking regular backups of users’ files and by testing that BitLocker is properly configured, with recovery keys escrowed somewhere IT can actually reach them.
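As a sketch of the kind of check I mean - this shells out to manage-bde on Windows (run it elevated; the parsing is deliberately naive and just illustrative):

```python
import subprocess

def bitlocker_protection_on(drive: str = "C:") -> bool:
    """Naively check whether BitLocker protection is on for a drive
    by parsing `manage-bde -status` output (Windows, elevated)."""
    out = subprocess.run(
        ["manage-bde", "-status", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    return "Protection On" in out

if __name__ == "__main__":
    print(f"C: protected: {bitlocker_protection_on()}")
```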
All that said, I’m far from an expert on this, just an IT admin trying to do what I can with company resources. Here’s hoping Crowdstrike and other companies greatly improve their QA testing, and IT departments finally get the tooling approved to improve their backup and recovery strategies.
If it’s any consolation, this is the first issue of its kind in the multiple years we’ve been using CS. Still unacceptable, but historically the product has been stable and effective for us. Hopefully this reminds higher-ups of the importance of proper testing before releases.
This occurred overnight around 5am UTC/1am EDT. CS checks in once an hour, so some machines escaped the bad update. If your machines were totally off overnight, consider yourself lucky
Guys, gals, and enby pals is pretty inclusive and rhymes
The subtitle of the article says it’s not available in the US - “PC Manager app is only available in some regions, but could come to the US eventually”
Mine is already talking about this news in a negative light. Makes it easier for me to bring in OpenTofu
That was early access. The full release is soon (Q2 2024) according to their Steam page
It does need other iPhones nearby that have an internet connection. We got a handful to test for family during our trips last November, even though we both use Android. They didn’t report in when we were away from other people, but kept location decently in crowded places like the airport. Android can detect when one is following you, but Android phones don’t participate in reporting locations back to the network (maybe that’ll change with the upcoming Find My Device features in Android 15)
Given the Steam Link still gets updates, I wouldn’t worry about the Deck for at least a console generation’s lifetime
Chiaki4deck is PS Remote Play for Linux. It’s pretty nifty
The only hope I have is that Yoshi-P is doing everything in his power to keep that from happening. Otherwise I’d absolutely expect it to be one after what 7R went through
Sorry, I don’t really have any for PC, we played Pico Park using remote co-op
By comparison, there were a few systems that had issues on February 29th because of leap day. Having problems with something that routine, in this day and age, should be unthinkable.
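For anyone curious how that class of bug happens, it’s almost always hand-rolled date math instead of the standard library. A toy Python illustration:

```python
from datetime import date, timedelta

# The classic hand-rolled bug: assuming February always has 28 days.
def add_one_day_buggy(y: int, m: int, d: int) -> tuple[int, int, int]:
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if d < days_in_month[m - 1]:
        return y, m, d + 1
    return (y + 1, 1, 1) if m == 12 else (y, m + 1, 1)

# On 2024-02-28 the buggy version rolls straight to March 1st...
print(add_one_day_buggy(2024, 2, 28))         # (2024, 3, 1) - skips leap day
# ...while the standard library knows 2024-02-29 exists.
print(date(2024, 2, 28) + timedelta(days=1))  # 2024-02-29
```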
Pico Park made our group rage in the best way. It’s a cute and fun game
If there’s an option on the AP to not permit client-to-client (link-local) traffic within a VLAN/SSID, that will force all traffic up to the firewall. Then you can block intrazone traffic at the firewall level for that VLAN.
I’ve seen this in Meraki hardware where it’s referred to as “client isolation”. Ubiquiti might be able to do this too.
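If you’re scripting it, something like this against the Meraki dashboard SDK is roughly the shape - but the network ID, SSID number, and especially the lanIsolationEnabled flag are from memory, so verify them against the current API docs first:

```python
import meraki  # pip install meraki

# Hypothetical sketch - the API key, network ID, and SSID number are
# placeholders, and lanIsolationEnabled is my best recollection of the
# client-isolation flag (bridge mode only); check the API docs.
dashboard = meraki.DashboardAPI(api_key="YOUR_API_KEY")

dashboard.wireless.updateNetworkWirelessSsid(
    networkId="N_1234567890",
    number="0",                # SSID slot to modify
    lanIsolationEnabled=True,  # drop client-to-client traffic at the AP
)
```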
FFXIV handles it “okay”, in that you get a large portion of glam in-game and the cash shop stuff is largely excess. There are a few cases where it would have been better to have the reward in-game, but for the most part I feel like I can play the game without needing to buy anything.
The impression the community gets is that the cash shop is a begrudging feature that SE higher-ups mandate to keep cash flowing (because XIV funds most of SE’s other projects)
To put what the other comment said in shorter terms: your website won’t work on networks that use DNS served by your DC. The website is fine from the Internet, but not from home or from an office/VPN if you’re an enterprise, since those clients resolve the domain through the DC.
“I can’t go to example.com on the VPN!” was a semi-common ticket at my last company 🙃
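You can see the split-horizon mismatch directly by asking the two resolvers for the same name. A rough dnspython sketch - example.com and the DC’s address are placeholders:

```python
import dns.resolver  # pip install dnspython

def lookup(name: str, nameserver: str) -> list[str]:
    """Ask a specific DNS server for a name's A records."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    try:
        return [rr.to_text() for rr in r.resolve(name, "A")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return ["no answer - authoritative server has no such record"]

# Public DNS knows the website's address...
print(lookup("www.example.com", "8.8.8.8"))
# ...but the internal DC, authoritative for example.com, returns nothing
# until someone manually adds a www record to the internal zone.
print(lookup("www.example.com", "10.0.0.10"))
```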
Discord server owners can choose to have their members require account verification before joining as an anti-bot measure.