

In a very real sense, applicants are first and foremost deciding if it works. If they can do something resembling standing together, and refuse at any reasonable scale to take part in AI making hiring decisions, it will fail.
A job at a company that won’t respect your basic humanity isn’t worth having. If you’d rather willingly step into that trap than stick with whatever you’re doing now, or pursue other options, are you okay? Like, if this sounds like an opportunity and not a giant red flag, I wish there was something I could offer to help you.
Honestly? Especially if it was only for cops
You don’t get to be a billionaire without some malfeasance.
And even if you don’t assume actively malicious intent like you should with Musk, there’s a lot of potential danger with technology like this. If you don’t stand to gain a lot, and don’t have reasonable controls against things going wrong, it’s probably not a good idea to be an early adopter. It’s just like a pacemaker: there’s only a narrow segment of people who should want to test a new model or concept.
Speed cameras are a privacy issue that doesn’t solve the problem of speeding. People are most comfortable driving the speed the road is designed for, and if that speed is too high, the solution is to modify the road for a safer speed. The speeders in your example are right here, for the wrong reason; speed cameras should be rare if they’re allowed to exist at all. They have, at most, a short term benefit, and broad public surveillance is a very serious issue they contribute to.
I was one of the people who went to college to learn things, but the more I learn, the more I’m saddened by all the people I went to school with who studied things they didn’t enjoy, didn’t particularly care to get better at, all because they saw it as a way to make money. In optimizing for money, they miss out on learning and fulfillment.
This wasn’t that long ago, but I can only imagine how much heavy GenAI use could intensify that effect
Imagine borrowing $200k for an education, and then doing as little work as you can to actually learn the things you’re paying to know
If a problem exists, and you try to fix it without AI, do you even stand a chance at getting promoted?
Since I’m so inexperienced as a CEO, I’d even do it for a tenth of a percent of his pay. I’d find a way to scrape by on several million a year.
Why do you think the cost of paying out of pocket is so high? Private insurance bears a significant part of the responsibility for causing that problem.
Structurally, yes, you need a system that amounts to healthy people saving, and sick people being taken care of from those savings, whether it’s individual or social. But our current system of private for-profit insurance is about as bad as such a system could be while still technically sorta working.
It sounds like they sent emails to the district and made some noise in online spaces that made their intentions clear. If it was just wearing wristbands as silent protest, we’d never have known, but they told the district via email, and the general public online, that they were going to do something bigoted, and then they did a minor version of it.
Imagining the perspective of an administrator, they really should do something about that to protect their students. And it seems like they went with a temporary ban, which seems proportionate.
It’s not that there wasn’t any political pressure. It’s that the slightest bit of pressure caused them to pull the plug swiftly.
I think the companies who were led by people personally antagonistic to DEI already weren’t doing it. They started it when the political winds were in favor of DEI, found that it did something beneficial for them that was worth the investment (ultimately, increasing profits, probably through PR) and reaped what they could. But the slightest headwinds caused them to drop it, for lack of confidence it would be worth the continued investment. For others, it was beneficial enough this pressure didn’t change their decisions.
None of this is likely coming from company leaders caring about DEI for some sort of principled reason, just companies who care about only one thing, reassessing the value of DEI in terms of that one thing: $ return on spend. This group needs subtler treatment than the anti-DEI crowd; these are fair-weather friends who don’t care. What little we can do is reward those who don’t give in to the slightest push.
The way Java is practically written, most of the overhead (read: inefficient slowdown) happens at load time, rather than in the middle of execution. The amount of speedup in hardware since the early 2000s has also definitely made programmers less worried about smaller inefficiencies.
Languages like Python or JavaScript have a lot more overhead while they’re running, and are less well-suited to running a server that needs to respond quickly, but certainly can do the job well enough, if a bit worse compared to something like Java/C++/Rust. I suspect this is basically what they meant by Java being well-suited.
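The runtime-overhead point is easy to see inside CPython itself: the same summation runs much faster when the loop happens in C (via the builtin `sum`) than when the interpreter executes it bytecode-by-bytecode. A rough sketch (exact timings will vary by machine, and this is illustrative, not a real server benchmark):

```python
# Rough illustration of interpreter overhead in CPython: a hot loop
# written in pure Python pays per-iteration bytecode-dispatch costs,
# while the same work pushed into a C-level builtin (sum) mostly avoids them.
import timeit

def py_loop(n):
    """Sum 0..n-1 with an explicit Python-level loop."""
    total = 0
    for i in range(n):
        total += i
    return total

n = 100_000
t_loop = timeit.timeit(lambda: py_loop(n), number=50)
t_builtin = timeit.timeit(lambda: sum(range(n)), number=50)
print(f"pure-Python loop: {t_loop:.4f}s, builtin sum: {t_builtin:.4f}s")
```

JIT-compiled runtimes like the JVM pay a similar dispatch cost only briefly, until the hot path gets compiled to native code, which is why the cost shows up at startup rather than in steady-state request handling.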
I’ve been using Cinnamon for most of the last decade, but switched to Gnome3 recently, heavily customized to work like Cinnamon. Basically because Wayland is finally stable enough to use.
If Cinnamon gets Wayland support working well, that’s my choice. Otherwise I’ve got some Gnome3 configs that make it work pretty well, and I’d happily run it into the ground too.
we’ve watched two billionaires, unfathomably rich individuals, decide in the last ten years that having a good public image was overrated, and that they’d rather use their platform to hurt people and alienate anyone who liked them but wasn’t a raging bigot. is the allure of being mean to people on twitter.com that great?
if Musk had shut up just 5 years ago, he’d probably have more money, more respect, but somewhat less power. instead he’s become the guy a lot of people are excited to see have a total breakdown, and hopefully lose everything.
There’s an important distinction here: “is a good idea” is not “is the right way to do it”. You can also keep kids off of dating apps by banning dating apps, banning children from the Internet, or even just banning children. All of those are horrible solutions, but they achieve the goal.
The goal should be to balance protecting kids with minimizing collateral damage. Forcing adults to hand over significant amounts of private data to prove their identity has the same basic fault as the hyperbolic examples, that it disregards the collateral damage side of the equation.
It’s all about the implementation. The Washington bill is treating diet products as similar to alcohol (check ID in-store and on delivery), which seems fine to me.
The NY law seems to be suggesting that dating app services need to collect (and possibly retain) sensitive information on people, like identification and location data. That’s troubling to me.
I think I basically agree with you and the author here. People applying technology have a responsibility to apply it in ways that are constructive, not harmful. Technology is a force multiplier, in that it makes it easier to achieve goals, in a value-neutral sense.
But way too many people are applying technology in evil ways, extracting value instead of creating it, making things worse rather than better. It’s an epidemic. Tech can make things better, and theoretically it should, but lately, it’s hard to say it has, on net.
In a normal administration I think you’re right, but this isn’t a normal administration. Officials who take an oath are sworn to uphold the constitution, not to follow orders from the president. Soldiers have a duty to disobey illegal orders, and DOJ attorneys have similar traditions.
If the president and top Justice department officials are knowingly and repeatedly ordering them to take actions that are clearly illegal, and are publicly known to be doing so… they’re not whistleblowers, they’re conscientious objectors to a criminal enterprise being run openly by public officials.
It’s a structural challenge more than a fallacy, but I don’t entirely disagree. This sort of thing works best when one of two things is true: there’s some way for people to organize, or it’s relatively small and there are real options.
The former clearly isn’t true here, but I think the latter is. There are a lot of companies trying things with AI, and some are working better or worse. This particular use is relatively small, and I think the downside of avoiding it is also small in the short term. (This is a giant red flag; avoiding a red flag isn’t a large cost.)