

All they had to do was run the tech alongside traditional cashiers. Make it known on entry, and you're fine. No ethical concerns.
But what they did was sell tech they didn't have to shareholders to pump up the stock.
LTT be like…
Sure, but you still shouldn’t be selling the technology as actually working when it’s still in development.
Amazon bought Whole Foods a while back. What would have stopped them from just collecting the data in their own stores and then developing the tech?
Hint: shareholder value.
Your study has no control group. So how do we know that’s the best way to get more out of people? The linked page doesn’t even specify which job types.
I’d still wager they’d be better served by better applications, not AI.
Also not the same person.
Forced overtime comes to mind as an easier, cheaper option.
But what counts as better depends on the field. For me, a faster computer reduces compile time, so NOT having AI overhead on my machine is more important.
People could get tons of workflows improved just by not abusing Excel as a database.
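Rough sketch of what I mean (file, sheet, and column names are made up): dump the spreadsheet into SQLite once, and the filtering people do by hand in Excel becomes a query.

```python
# Hypothetical sketch: migrate an Excel sheet being abused as a
# "database" into SQLite. File/sheet/column names are made up.
import sqlite3
import pandas as pd

df = pd.read_excel("orders.xlsx", sheet_name="Orders")  # needs openpyxl
con = sqlite3.connect("orders.db")
df.to_sql("orders", con, if_exists="replace", index=False)

# What used to be manual filtering in Excel is now one line of SQL:
unshipped = pd.read_sql_query(
    "SELECT customer, total FROM orders WHERE shipped = 0", con
)
con.close()
```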
I don’t know, there’s a lot of racism and anti-Semitism in those books as well. I mean, the money-obsessed, long-nosed goblins? Cho Chang?
They’re good stories, but they do reflect the work of a fundamentally bigoted author.
Buddy I don’t know who fucked your mom or pissed in your corn flakes. But it wasn’t me.
All you’ve done is level accusations at my character while contributing nothing of value.
Have the day you deserve
I said AI isn’t close in education. That was my entire claim.
I never said anything about any other company. I said AI in education isn’t happening soon. You keep pulling in other sectors.
I’d also made several comments in this thread saying that before you came in.
EDIT: give me a citation that LLMs can reason about code. Because in my experience as someone who professionally codes with AI (Copilot), it’s not capable of that. It guesses what it thinks I want to write in small segments.
https://x.com/leojr94_/status/1901560276488511759
Especially when it has a nasty habit of leaking secrets.
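The standard mitigation, for what it’s worth (the names here are illustrative, not from that post): never let generated code hardcode a credential in the first place.

```python
# Illustrative sketch: keep credentials out of the source entirely,
# so an autocompleted (or committed) file has nothing to leak.
import os

# The pattern assistants happily suggest, and repos then leak:
# API_KEY = "sk-live-abc123"   # <- never do this

# Read it from the environment instead, and fail fast if missing:
API_KEY = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical name
if API_KEY is None:
    raise RuntimeError("Set MY_SERVICE_API_KEY before running")
```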
EDIT 2: forgot to say why I’m ignoring other fields. Because we’re not talking about AI in those fields. We’re talking education, and search engines at best. My original comment was that AI-generated educational papers still serve their original purpose.
What the fuck does Palantir have to do with any of this?
My larger point: AI replacing teachers is at least a decade away.
You’ve given no evidence that it is. You’ve just said you hate my sources, without actually making a single argument of your own.
You said, well, it stores context. But who cares? I showed that it doesn’t translate to what you think, and you said you didn’t like that, without providing any evidence that it means anything beyond looking good on a graph.
I’ve said several times: SHOW ME IT’S CLOSE. I don’t care what law enforcement buys, because that has nothing to do with education.
As opposed to the nothing you’ve cited showing that context tokens actually improve reasoning?
I love how you keep going further and further away from the education topic at hand, and are now bringing in police surveillance, which everyone knows is 100% accurate.
Okay, here’s a non-Apple source, since you want one.
https://arxiv.org/abs/2402.12091
5 Conclusion: In this study, we investigate the capacity of LLMs, with parameters varying from 7B to 200B, to comprehend logical rules. The observed performance disparity between smaller and larger models indicates that size alone does not guarantee a profound understanding of logical constructs. While larger models may show traces of semantic learning, their outputs often lack logical validity when faced with swapped logical predicates. Our findings suggest that while LLMs may improve their logical reasoning performance through in-context learning and methodologies such as CoT, these enhancements do not equate to a genuine understanding of logical operations and definitions, nor do they necessarily confer the capability for logical reasoning.
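To unpack what “swapped logical predicates” means there (my toy illustration, not the paper’s code): validity is a purely structural property, unchanged by renaming the predicates, which is exactly what the authors probe the models with.

```python
# Toy illustration (mine, not from the paper): an argument is valid
# iff no truth assignment makes all premises true and the conclusion
# false. That check is structural, so swapping or renaming the
# predicates can't change the answer, unlike a model that merely
# pattern-matched familiar "if P then Q" phrasings.
from itertools import product

def is_valid(premises, conclusion, n_vars):
    return all(
        not (all(p(*vals) for p in premises) and not conclusion(*vals))
        for vals in product([False, True], repeat=n_vars)
    )

# Modus ponens with the predicates renamed to nonsense (flurb, gronk):
# premises: flurb -> gronk, flurb; conclusion: gronk.
print(is_valid(
    [lambda flurb, gronk: (not flurb) or gronk,  # flurb -> gronk
     lambda flurb, gronk: flurb],                # flurb
    lambda flurb, gronk: gronk,                  # therefore gronk
    n_vars=2,
))  # -> True, no matter what the names "mean"
```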
And still nothing peer-reviewed to show?
Synthetic benchmarks mean nothing. I don’t care how much context someone can store when the context being stored is putting glue on pizza.
Again, I’m looking for some academic sources (doesn’t have to be STEM; education would be preferred here) showing that the current tech is close to useful.
So you say I should be intellectually honest by doing the experiment myself, then say that my experiment is going to be shit anyway? Sure… that’s also intellectually honest.
Here’s the thing.
My education is in physics, not CS. I know enough to know what I try isn’t going to be really valid.
But unless you have peer-reviewed research to show otherwise, I’d take your home-grown experiment to be about as valid as mine.
And that’s if you assume the unlimited power needed right now to run something like AlphaFold at the scale of all human education.
We have, at best, proofs of concept that computers can talk. But LLMs don’t have any way of actually knowing anything behind the words. That’s kinda the problem.
And it’s not a “we’ll figure out the one trick” situation; more fundamentally, how the tech works doesn’t allow for that to happen.
EDIT: you can literally get a PhD in many forms of education and have an entire career studying it.
Specialized AI like that is not what most people mean by AI. Most people are referring to LLMs.
Specialized AI, like that showcased, is still decades away from generalized creative thinking. You can’t ask it to do a science experiment within a class, because it just can’t. It’s only built for math proofs.
Again, my argument isn’t that it will never exist.
Just that it’s so far off, it’d be like trying to write smartphone laws in the 90s. We would have had only pipe dreams as to what the tech could be, never mind its broader social context.
So talk to me when it can, in the case of this thread, teach in clinically validated ways. We’re still decades from that.
If you read it, it’s capable of very little beneath the surface of what it appears to be.
Show me one that is well studied, like clinical trial levels, then we’ll talk.
We’re decades away at this point.
My overall point is that it’s just as meaningless to talk about now as it was in the 90s. Because we can’t conceive of what a functioning product will be, never mind its context in a greater society. When we have it, we can discuss it then, since we’ll have something tangible to discuss. But where we’ll be in decades is hard to regulate now.
but what good is that if AI can do it anyway?
It can’t. It just fucking can’t. We’re all pretending it does, but it fundamentally can’t.
Creative thinking is still a long way beyond reasoning as well. We’re not close yet.
What are you talking about?
It was never AI. It was always cheap remote workers in foreign countries. But you’d take that and sell it as AI, like they did?