I hope he sticks with it. Insuring his collection could be a sticking point, but he may get you into sticky situations without insurance. I’m sure he’ll find an agent to stick up for him.
It depends on the role. My first job was doing manual QE on Windows, and knowing Linux wasn’t much help at the time, but it did help me transition to a coding role in the same company a year later. I’m now doing platform engineering at a major tech company, but that has a high barrier to entry, which I suspect is the case for most roles that are Linux-focused. If you’re trying to get your foot in the door, I think you should look at job profiles for low barrier to entry roles (e.g. tech support) and try to work your way up.
Probably because the individual engineers working on Takeout care about doing a good job, even though the higher-ups would prefer something half-assed. I work for a major tech company and I’ve been in that same situation before, e.g. when I was working on GDPR compliance. I read the GDPR and tried hard to comply with the spirit of the law, but it was abundantly clear that no one above me had read it and that they only cared about doing the bare minimum.
There’s no financial incentive for them to make it easy to leave Google. Takeout only exists to comply with regulations (e.g. the Digital Markets Act), and as usual, they’re doing the bare minimum to not get sued.
I know aleph-null guys. All in the same family. Parents were lazy and named all their kids after the positive integers. 42 is my best friend.
Wait till she learns about zombie children
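For anyone who missed the joke: a "zombie child" is a Unix child process that has exited but hasn’t yet been reaped by its parent with `wait()`; it lingers in the process table as `<defunct>`. A minimal sketch (POSIX-only, so it won’t run on Windows):

```python
import os
import time

pid = os.fork()
if pid == 0:
    # Child: exit immediately. Until the parent reaps it,
    # it will show up in `ps` as <defunct>, i.e. a zombie.
    os._exit(0)
else:
    # Parent: for this one second the child is a zombie.
    time.sleep(1)
    # Reaping the child removes the zombie from the process table.
    reaped, status = os.waitpid(pid, 0)
```

If the parent never calls `waitpid`, the zombie persists until the parent itself dies and `init` adopts and reaps it.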
That’s not what I’m saying at all. I’m saying the rich and powerful have a vested interest in not taking risks that jeopardize their power and wealth, because they have more to lose.
The reason these models are being heavily censored is that big companies are hyper-sensitive to the reputational harm that comes from uncensored (or less-censored) models. This isn’t unique to AI; the same dynamic has played out countless times before. One example is content moderation on social media: big players like Facebook tend to be more heavy-handed about moderating than small players like Lemmy. The fact that small players don’t need to worry so much about reputational harm is a significant competitive advantage, since it gives them more freedom to take risks, so this situation is probably temporary.
Let me guess, its favorite band is sssssslayer