For payroll tax, you can pressure your employer and/or claim a withholding election smaller than what's real. For sales tax, you can replace some purchases with barter (and refuse to buy other things).
Free online textbook on tax strike practice, history, and philosophy: here
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
2 · 2 days ago
If I were to try and play up his argument, I might appeal to 'we can shorten the dark times', Asimov's Foundation style. But I admit my heart's not in it. Things will very likely get worse before they get better, partially because I don't particularly trust anyone with the ability to influence things just a bit to actually use that influence productively.
I do think this oligarchy has very different tools than those of old: far fewer mercenary assassinations of labor leaders, a very different and weirdly shaped stranglehold on media, and I put lower odds on a hot conflict with strikers.
I don't know the history of hubris among oligarchs; were the Tsars or Barons also excited about (absurd and silly) infrastructure projects explicitly for the masses? I guess there were the Ford towns in the Amazon?
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
2 · 2 days ago
I am primarily trying to restate or interpret Schneier's argument and bring the link into the comments. I'm not sure I'm very good at it.
He points out a problem which is more or less exactly as you describe it. AI is on a fast track to be exploited by oligarchs and tyrants. He then makes an appeal: we should not let this technology, which is a tool just as you say, be defined by the evil it does. His fear is: “that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.”
That's the argument, afaict. I think the "so what" is something like: scientists will do experiments and analysis and write papers which inform policy, inspire subversive use, and otherwise use the advantages of the quick to make gains against the strong. See the four action items they call for.
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
1 · 2 days ago
I think of it more like the genie being out of the lamp: it's now very cheap to fine-tune a huge model and deploy it. Policy and regulation need to deal with that fact.
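To make "very cheap" concrete: parameter-efficient methods like LoRA freeze the base model and train only small adapter matrices, which is what puts fine-tuning within reach of a single consumer GPU. A minimal sketch, assuming the Hugging Face transformers and peft libraries; the checkpoint name and hyperparameters are illustrative placeholders, not recommendations:

```python
# Sketch of parameter-efficient fine-tuning (LoRA) with transformers + peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of the full model,
# which is what makes consumer-GPU fine-tuning affordable.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"],
                    lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the weights
```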
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
1 · 2 days ago
Were we mad at the public technologist?
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
11 · 2 days ago
Success would lead to AI use that properly accounted for its environmental impact and had to justify its costs. That likely means much AI use stopping, and broader reuse of models that we've already invested in (less competition in the space, please).
The main suggestion in the article is regulation, so I don’t feel particularly understood atm. The practical problem is that, like oil, LLM use can be done locally at a variety of scales. It also provides something that some people want a lot:
- Additional (poorly done) labor. Sometimes that’s all you need for a project
- Emulation of proof of work for existing infrastructure (e.g., job applications)
- Translation and communication customization
It's thus extremely difficult to regulate into non-existence globally (and it would probably be bad if we did). So effective regulation must include persuasion and support for the folks who would most benefit from using it (or you need a huge enforcement effort, which I think has its own downsides).
The problem is that even if everyone else leaves the hole, there will still be these users. Just like drug use, piracy, or gambling, it's easier to regulate when we make a central, easy-to-access service and do harm reduction. To do this you need a product that meets the needs and mitigates the harms.
Persuading me I'm directionally wrong would require evidence such as:
- Everyone does want to leave the hole (hard; I know people who don't, and anti-AI messaging thus far has been more about signaling than persuasion)
- That LLMs really can't be run locally, or can be made difficult to run locally (hard; the Internet gives too much data, and making computing time expensive has a lot of downsides. See the sketch after this list.)
- Proposed regulation that would actually be enforceable at reasonable cost (I haven't thought hard about it; maybe this is easy?)
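On the second bullet: a quantized model already runs locally in a few lines, which is why the "make it hard to do locally" bar is so high. A minimal sketch, assuming the llama-cpp-python library and a pre-downloaded GGUF checkpoint; the file path and prompt are hypothetical:

```python
# Sketch of fully local LLM inference with llama-cpp-python.
from llama_cpp import Llama

# Load a quantized checkpoint from disk; no network access needed.
llm = Llama(model_path="./models/example-7b.Q4_K_M.gguf")  # hypothetical path

# One-shot completion, entirely on local hardware.
out = llm("Summarize this job posting in one line:", max_tokens=64)
print(out["choices"][0]["text"])
```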
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
11 · 2 days ago
I think the argument is that, like with climate, it's really hard to get people to just stop. They must be redirected with a new goal. "Don't burn the rainforests" didn't change oil company behavior.
Artisian@lemmy.world (OP) to Technology@lemmy.world • Scientists Need a Positive Vision for AI (English)
39 · 2 days ago
I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that treats novel text generation as proof of humanity.
So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility. Translation and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make the insurance companies deal with the slop) come immediately to mind.
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
2 · 6 days ago
Your comment made me start looking through the other clearly 'misinfo' posts I've seen so far. All posted by the OP here. I'm gonna block him.
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
1 · 6 days ago
Apparently the first link to a 'Pew study' is wrong (it goes to Pew, but doesn't mention Reddit much). See here
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
1 · 6 days ago
Or possibly the article is AI slop (at least, can someone find the Pew survey they lead with and claim to base the headline on? See here)
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
2 · 6 days ago
Just noting that the links inside the article seem to be wrong: https://lemmy.world/post/38174729/20270142
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
1 · 6 days ago
Oh wow, that's very bad.
Thank you for trying to hunt down the poll. I appreciate it.
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
1 · 6 days ago
Please do correct me if I'm reading poorly, but the first subheaded section in the article doesn't claim to be quoting a summary of experts; it quotes a Pew poll of 2.5k typical Americans on whether they see 'corporate trolls' on Reddit. Clicking through the Pew link, I see that Pew has a much longer article of expert opinions on this, with topics covering many social media sites and phenomena. That includes a survey of 1.3k experts, but it is also weird: 42% claim the online climate won't change substantially in the next decade?
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
2 · 6 days ago
I think the effect is really not obvious. Could you explain what makes you feel this way? Consider:
- People still on the platform probably care little about it. We left, so there’s probably a survivor bias?
- Bots have substantially more technology to ‘seamlessly’ hide than even a few years ago.
- Companies have more direct ways to advertise (sponsored answers, for example) that I don't think are counted by the survey. Maybe fewer are buying bots, karma farming, or DM spamming.
My gut feelings are pessimistic. But I would like my beliefs to be a little more grounded than that.
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
2 · 6 days ago
I'd be interested in seeing a report of the change in these numbers; I'm guessing there's not been much.
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
8 · 6 days ago
Note that their methodology for this study, afaict, would also entirely miss subtle stuff.
Either the point about frequency is valid, or this is a weak headline, no?
Artisian@lemmy.world to Technology@lemmy.world • New Study: At Least 15% of All Reddit Content is Corporate Trolls Trying to Manipulate Public Opinion (English)
4 · 6 days ago
It feels like the headline reinforces my first urge, so I'm feeling a bit on guard.
I’m not sure how you operationalize (or falsify) ‘15% of people interacted with folks who they think like companies’.

Practice those tasks in a large, already-functional open source project. Try to shadow people who already do them. Very relevant link for LibreOffice contributors of all kinds
A relevant, more general guide (which doesn't look too out of date): here
Lastly, I'll mention joining organizations adjacent to what you're already excited about working on. I do research code and joined an organization for data scientists in academia. They do regular training and events on things like UI design and documentation.