



What about AV2 + Opus though!?


It’s a bit excessive for my taste as well. Traditionally if you felt the need to cut this much just to make the sentence come out the way you want, you’d just do another take instead of making this many cuts in post. Over-cutting of spacing also makes the pacing a bit too “word-vomit” rather than “polished” imo.
I imagine this is more normalized in stereotypically “zoomer” presentation of video content, but it might also just be this guy’s (or their editor’s) style.
Don’t worry, you can instead visit this reputable URL: https://cheap-bitcoin.online/scanner-hijacker/malicious-payload/trojan_extractor_tool.msi?firewall=tamper&id=11aa4591&origin=spoof&payload=(function(){+return+undefined%3B+})()%3B&sessiontoken=spoof&useragent=inject
( https://phishyurl.com/ via https://chaos.social/@FlohEinstein/115212955110814540 )


I actually think this video is doing a pretty bad job of summarizing the practical-comparison part of the paper.
If you go here you can get a GitHub link which in turn has a OneDrive link with a dataset of images and textures which they used. (This doesn’t include some of the images shown in the paper - not sure why and don’t really want to dig into it because spending an hour writing one comment as-is is already a suspicious use of my time.)
Using the example with an explicit file size mentioned in the video which I’ll re-encode with Paint.NET trying to match the ~160KB file size:
Hadriscus has the right idea suggesting that JPEG is the wrong comparison, but this type of low-detail image at low bit rates is actually where AVIF rather than JPEG XL shines. The latter (for this specific image) looks a lot worse at the above settings, and WebP is generally just worse than AVIF or JPEG XL for compression efficiency since it’s much older. This type of image is also where I would guess this type of compression / reconstruction technique also does comparatively well.
But honestly, the technique as described by the paper doesn’t seem to be trying to directly compete against JPEG, which is another reason I don’t like that the video put a spotlight on that comparison; quoting the paper:
We also include JPEG [Wallace 1991] as a conventional baseline for completeness. Since our objective is to represent high-resolution images at ultra-low bitrates, the allowable memory budget exceeds the range explored by most baselines.
Most image compression formats (with AVIF being a possible exception) aren’t tailored for “ultra-low bitrates”. Nevertheless, here’s another comparison with the flamingo photo in the dataset where I’ll try to match the 0.061 bpp low-side bit rate target (if I’ve got my math right that’s 255,860.544 bits):
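For anyone wanting to check the arithmetic, the bpp-to-size conversion is just pixel count times bits per pixel; here’s a quick sketch (the 2048×2048 resolution below is my assumption for illustration, not a figure taken from the paper):

```python
def bpp_target_bits(width: int, height: int, bpp: float) -> float:
    """Bits-per-pixel targets scale with pixel count, independent of codec."""
    return width * height * bpp

# Hypothetical 2048x2048 image at the paper's 0.061 bpp low end
bits = bpp_target_bits(2048, 2048, 0.061)
print(f"{bits:,.3f} bits = {bits / 8 / 1024:.1f} KiB")
```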
(Ideally I would now compare this image at some of the other, higher bpp targets but I am le tired.)
It looks like interesting research for low bit rate / low bpp compression techniques and is probably also more exciting for anyone in the “AI compression” scene, but I’m not convinced about “Intel Just Changed Computer Graphics Forever!” as the video title.
As an aside, every image in the supplied dataset looks weird to me (even the ones marked as photos), as though it were AI-generated or AI-enhanced or something - not sure if the authors are trying to pull a fast one or if misuse of generative AI has eroded my ability to discern reality 🤔
edit: to save you from JPEG XL hell, here’s the JPEG XL image losslessly re-encoded to a PNG (since you probably can’t view JPEG XL directly): https://files.catbox.moe/8ar1px.png
Really wish they published the whole dataset. They don’t specify on the page or in the paper what the full set was like, and the GitHub repo only has one of the easy-to-read ones. If >=10% of the set is composed of clock faces designed not to be readable then fair enough.
The human level accuracy is less than 90%!?


Anecdotally, quite a lot of users vote “selfishly” and don’t care that downvoting reduces visibility. The All and Local feeds also fall victim to people voting as if these were their own personal curated feeds.
And I hate it 🫠
“We’re going to collect as much data about you as we can to sell to advertisers”
That’s a rather pessimistic interpretation of a privacy policy that starts with this:
The spirit of the policy remains the same: we aren’t here to exploit you or your info. We just want to bring you great new videos and creators to enjoy, and the systems we build to do that will sometimes require stuff like cookies.
and which in section 10 (Notice for Nevada Residents) says:
We do not “sell” personal information to third parties for monetary consideration [as defined in Nevada law] […] Nevada law defines “sale” to mean the exchange of certain types of personal information for monetary consideration to another person. We do not currently sell personal information as defined in the Nevada law.
So yes, I suppose they may be selling personal information by some other definition (I don’t know the Nevada law in question). But it feels extremely aggressive to label it a “shithole” that “collect[s] as much data about you as we can to sell to advertisers” based on the text of the privacy policy as provided.


I guess perspective here depends on your anchoring point. I’m anchoring mostly on the existing platform (YouTube), and Nebula’s policy here looks better (subjectively much better) than what passes for normal in big tech. If your anchor is your local PeerTube instance with a privacy policy that wasn’t written by lawyers, I can see how you’d not be a fan.
However, beyond being in legalese, I’m not sure what part of it you find so bad as to describe it as a shithole. Even compared to e.g., lemmy.world’s privacy policy Nebula’s looks “good enough” to me. They collect slightly more device information than I wish they did and are more open to having/using advertising partners than I had expected (from what I know of the service as someone who has never actually used it) but that’s like… pretty tame compared to what most of the big platforms have.


Nebula is a shithole, just have a glance at their privacy policy.
It looks pretty run of the mill to me?
Can access fine (with reduced functionality) on my end with JS disabled - maybe you have something else tripping it up or something?
There is a deep irony in covering this by writing about it… on Substack
It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few times where things are misrepresented in this post e.g.:
Newegg selling the ASUS ROG Astral GeForce RTX 5090 for $3,359 (MSRP: $1,999)
eBay Germany offering the same ASUS ROG Astral RTX 5090 for €3,349.95 (MSRP: €2,229)
The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral – a top-end card being used for overclocking world records – is $2.8k. I couldn’t quickly find the European MSRP but my money’s on it being more than 2.2k euro.
If you’re a creator, CUDA and NVENC are pretty much indispensable, or editing and exporting videos in Adobe Premiere or DaVinci Resolve will take you a lot longer[3]. Same for live streaming, as using NVENC in OBS offloads video rendering to the GPU for smooth frame rates while streaming high-quality video.
NVENC isn’t much of a moat right now, as both Intel and AMD’s encoders are roughly comparable in quality these days (including in Intel’s iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming/recording gameplay the alternatives should be roughly equivalent for most users.
as recently as May 2025 and I wasn’t surprised to find even RTX 40 series are still very much overpriced
Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn’t had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).
DLSS is, and always was, snake oil
I personally find this take crazy given that DLSS2+ / FSR4+, when quality-biased, average visual quality comparable to native for most users in most situations, and that was with DLSS2 in 2023, not even DLSS3 let alone DLSS4 (which is markedly better on average). I don’t really care how a frame is generated if it looks good enough (and doesn’t come with other notable downsides like latency). This almost feels like complaining about screen space reflections being “fake” reflections. Like yeah, it’s fake, but if the average player experience is consistently better with it than without it then what does it matter?
Increasingly complex manufacturing nodes are becoming expensive as all fuck. If it’s more cost-efficient to use some of that die area for specialized cores that can do high-quality upscaling instead of natively rendering everything with all the die space then that’s fine by me. I don’t think blaming DLSS (and its equivalents like FSR and XeSS) as “snake oil” is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can’t tell it’s not native, then sign me the fuck up for option #2. For people less fussy about static image quality and more invested in smoothness, they can be perfectly happy with 100 FPS but marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.
There’s some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint then why is the sub-heading “DLSS is, and always was, snake oil”?
obligatory: disagreeing with some of the author’s points is not the same as saying “Nvidia is great”


Do you not see any value in engaging with views you don’t personally agree with? I don’t think agreeing with it is a good barometer for whether it’s post-worthy.


I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably also still largely true for people that don’t look into it too hard. Part of my perspective is coming from this random video I watched not long ago which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.
Misc points:
So yes, they are a long way from selling “100% fair” phones, but it seems like they’re moving the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.
Unfortunately it’s hard for the rest of us to tell if you genuinely want a video to save you from having to read 18 sentences or if you’re just taking the piss lol


For platforms that don’t accept those types of edits, the link OP tried to submit: https://www.theverge.com/news/690815/bill-gates-linus-torvalds-meeting-photo


That video of them interviewing people on the street with it was pretty fun!
So they literally agree not using an LLM would increase your framerate.
Well, yes, but the point is that while you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.
Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?
If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.
For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.
“for life” covers that eventuality :P