Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP). He/him.

(header photo by Brian Maffitt)

  • 0 Posts
  • 22 Comments
Joined 2 years ago
Cake day: June 17th, 2023



  • It’s a bit excessive for my taste as well. Traditionally, if you felt the need to cut this much just to make a sentence come out the way you want, you’d just do another take instead of making this many cuts in post. Cutting out this much of the natural spacing also makes the pacing a bit too “word-vomit” rather than “polished” imo.

    I imagine this is more normalized in stereotypically “zoomer” presentation of video content, but it might also just be this guy’s (or their editor’s) style.



  • I actually think this video is doing a pretty bad job of summarizing the practical-comparison part of the paper.

    If you go here you can get a GitHub link, which in turn has a OneDrive link with the dataset of images and textures they used. (This doesn’t include some of the images shown in the paper - I’m not sure why, and I don’t really want to dig into it, because spending an hour writing one comment as-is is already a suspicious use of my time.)

    Using the example from the video with an explicit file size mentioned (~160KB), which I’ll re-encode with Paint.NET while trying to match that size:

    Hadriscus has the right idea suggesting that JPEG is the wrong comparison, but this type of low-detail image at low bit rates is actually where AVIF, rather than JPEG XL, shines. The latter (for this specific image) looks a lot worse at the above settings, and WebP is generally just worse than AVIF or JPEG XL for compression efficiency since it’s much older. This type of image is also where I would guess this compression / reconstruction technique does comparatively well.
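
    If you’d rather reproduce this kind of comparison with the reference CLI encoders than with Paint.NET, here’s a rough sketch of the idea. The use of avifenc (libavif) and cjxl (libjxl), plus the filenames and quality values, are my assumptions - exact flags vary by version, so treat it as a starting point rather than a recipe:

    ```python
    # Rough sketch: re-encode one source image with AVIF and JPEG XL and
    # compare the resulting file sizes. Assumes libavif's avifenc and
    # libjxl's cjxl are on PATH; filenames and quality values are
    # placeholders to tweak until the outputs land near the target size.
    import os
    import subprocess

    SOURCE = "source.png"  # placeholder input image

    # AVIF: in recent avifenc builds, -q is a 0-100 color-quality scale.
    subprocess.run(["avifenc", "-q", "30", SOURCE, "out.avif"], check=True)

    # JPEG XL: cjxl's -q is also a 0-100 quality scale.
    subprocess.run(["cjxl", SOURCE, "out.jxl", "-q", "30"], check=True)

    for f in ("out.avif", "out.jxl"):
        print(f, os.path.getsize(f), "bytes")
    ```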

    But honestly, the technique as described by the paper doesn’t seem to be trying to directly compete against JPEG which is another reason I don’t like that the video put a spotlight on that comparison; quoting the paper:

    We also include JPEG [Wallace 1991] as a conventional baseline for completeness. Since our objective is to represent high-resolution images at ultra-low bitrates, the allowable memory budget exceeds the range explored by most baselines.

    Most image compression formats (with AVIF being a possible exception) aren’t tailored for “ultra-low bitrates”. Nevertheless, here’s another comparison with the flamingo photo in the dataset, where I’ll try to match the 0.061 bpp low-side bit rate target (if I’ve got my math right that’s 255,860.544 bits - see the sketch after this list):

    • Original PNG (2,811,804 bytes) https://files.catbox.moe/w72nsv.png
    • AVIF; as above but quality 30 (31,238 bytes) https://files.catbox.moe/w2k2eo.avif
    • JPEG XL could not go below ~36KB even at quality 0 when using my available encoder, so I considered it to fail this test
    • JPEG (including when using MozJPEG, which is generally more efficient than “normal” JPEG) and WebP could only hit the target file size by looking garbage, so I considered them to fail this test out of hand
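
    For anyone checking the bpp math, the arithmetic is just width × height × bpp. A minimal sketch (the dimensions below are placeholders, not the flamingo photo’s actual resolution - plug in the real ones to verify my number):

    ```python
    # Minimal sketch of the bits-per-pixel -> file-size budget arithmetic.
    # The width/height are placeholder values, not the flamingo photo's
    # actual resolution - substitute the real dimensions to check the
    # figure above.
    width, height = 2048, 2048   # assumed/placeholder resolution
    bpp_target = 0.061           # low-side target from the paper

    target_bits = width * height * bpp_target
    print(f"{target_bits:,.3f} bits = {target_bits / 8:,.1f} bytes")
    ```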

    (Ideally I would now compare this image at some of the other, higher bpp targets but I am le tired.)

    It looks like interesting research for low bit rate / low bpp compression techniques and is probably also more exciting for anyone in the “AI compression” scene, but I’m not convinced about “Intel Just Changed Computer Graphics Forever!” as the video title.


    As an aside, every image in the supplied dataset looks weird to me (even the ones marked as photos), as though it were AI-generated or AI-enhanced or something - not sure if the authors are trying to pull a fast one or if misuse of generative AI has eroded my ability to discern reality 🤔


    edit: to save you from JPEG XL hell, here’s the JPEG XL image (which you probably can’t view) losslessly re-encoded to a PNG: https://files.catbox.moe/8ar1px.png
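
    (If you want to do that conversion yourself, the idea is just a decode - a sketch assuming libjxl’s djxl tool, with placeholder filenames; the decode step adds no further loss on top of the original lossy encode:)

    ```python
    # Sketch: decode a JPEG XL file to a universally-viewable PNG with
    # libjxl's djxl CLI. Decoding adds no further loss on top of the
    # (lossy) JXL encode. Filenames are placeholders.
    import subprocess

    subprocess.run(["djxl", "flamingo.jxl", "flamingo.png"], check=True)
    ```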





  • “We’re going to collect as much data about you as we can to sell to advertisers”

    That’s a rather pessimistic interpretation of a privacy policy that starts with this:

    The spirit of the policy remains the same: we aren’t here to exploit you or your info. We just want to bring you great new videos and creators to enjoy, and the systems we build to do that will sometimes require stuff like cookies.

    and which in section 10 (Notice for Nevada Residents) says:

    We do not “sell” personal information to third parties for monetary consideration [as defined in Nevada law] […] Nevada law defines “sale” to mean the exchange of certain types of personal information for monetary consideration to another person. We do not currently sell personal information as defined in the Nevada law.

    So yes, I suppose they may be selling personal information by some other definition (I don’t know the Nevada law in question). But it feels extremely aggressive to label it a “shithole” that “collect[s] as much data about you as we can to sell to advertisers” based on the text of the privacy policy as provided.


  • I guess perspective here depends on your anchoring point. I’m anchoring mostly on the existing platform (YouTube), and Nebula’s policy here looks better (subjectively much better) than what passes for normal in big tech. If your anchor is your local PeerTube instance with a privacy policy that wasn’t written by lawyers, I can see how you’d not be a fan.

    However, beyond it being written in legalese, I’m not sure what part of it you find so bad as to describe it as a shithole. Even compared to e.g., lemmy.world’s privacy policy, Nebula’s looks “good enough” to me. They collect slightly more device information than I wish they did and are more open to having/using advertising partners than I had expected (from what I know of the service as someone who has never actually used it), but that’s like… pretty tame compared to what most of the big platforms have.





  • MHLoppy@fedia.io to Technology@lemmy.world - *Permanently Deleted*

    It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few places where things are misrepresented in this post, e.g.:

    Newegg selling the ASUS ROG Astral GeForce RTX 5090 for $3,359 (MSRP: $1,999)

    eBay Germany offering the same ASUS ROG Astral RTX 5090 for €3,349.95 (MSRP: €2,229)

    The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral – a top-end card being used for overclocking world records – is $2.8k. I couldn’t quickly find the European MSRP but my money’s on it being more than 2.2k euro.

    If you’re a creator, CUDA and NVENC are pretty much indispensable, or editing and exporting videos in Adobe Premiere or DaVinci Resolve will take you a lot longer[3]. Same for live streaming, as using NVENC in OBS offloads video rendering to the GPU for smooth frame rates while streaming high-quality video.

    NVENC isn’t much of a moat right now, as both Intel and AMD’s encoders are roughly comparable in quality these days (including in Intel’s iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming/recording gameplay the alternatives should be roughly equivalent for most users.
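
    If you want to sanity-check the “roughly comparable” claim on your own hardware, the kind of test I mean looks something like this rough sketch - it assumes an ffmpeg build with the relevant hardware encoders enabled, and encoder names/availability vary by vendor, build, and driver:

    ```python
    # Rough sketch: encode the same clip with each vendor's H.264 hardware
    # encoder at the same bitrate target, then compare outputs by eye or
    # with a metric like VMAF. Assumes an ffmpeg build with these encoders
    # enabled; the input filename is a placeholder.
    import subprocess

    SOURCE = "gameplay.mkv"  # placeholder input clip
    ENCODERS = {
        "nvenc": "h264_nvenc",  # Nvidia
        "qsv": "h264_qsv",      # Intel
        "amf": "h264_amf",      # AMD
    }

    for name, codec in ENCODERS.items():
        subprocess.run(
            ["ffmpeg", "-y", "-i", SOURCE, "-c:v", codec, "-b:v", "8M",
             f"out_{name}.mp4"],
            check=True,
        )
    ```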

    as recently as May 2025 and I wasn’t surprised to find even RTX 40 series are still very much overpriced

    Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn’t had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).


    DLSS is, and always was, snake oil

    I personally find this take crazy given that DLSS2+ / FSR4+, when quality-biased, average visual quality comparable to native for most users in most situations - and that was with DLSS2 in 2023, not even DLSS3, let alone DLSS4 (which is markedly better on average). I don’t really care how a frame is generated if it looks good enough (and doesn’t come with other notable downsides like latency). This almost feels like complaining about screen space reflections being “fake” reflections. Like yeah, it’s fake, but if the average player experience is consistently better with it than without it then what does it matter?

    Increasingly complex manufacturing nodes are getting expensive as all fuck. If it’s more cost-efficient to use some of that die area for specialized cores that can do high-quality upscaling, instead of natively rendering everything with all the die space, then that’s fine by me. I don’t think blaming DLSS (and its equivalents like FSR and XeSS) as “snake oil” is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can’t tell it’s not native, then sign me the fuck up for option #2. People less fussy about static image quality and more invested in smoothness can be perfectly happy with 100 FPS and marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.

    There’s some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint then why is the sub-heading “DLSS is, and always was, snake oil”?


    obligatory: disagreeing with some of the author’s points is not the same as saying “Nvidia is great”



  • I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably also still largely true for people who don’t look into it too hard. Part of my perspective comes from this random video I watched not long ago, which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.

    Misc points:

    • In targeting Scope 2 emissions they went with renewables to get down to 0 Scope 2 emissions. (p13)
    • In targeting Scope 3 emissions they rejigged their transportation a little (ocean freight instead of flying, it sounds like?) to reduce emissions there. (p14)
    • In targeting Scope 3 emissions they used an unspecified level of renewable energy in late manufacturing with modest claimed emissions reductions. (p14)
    • Retired some carbon credits, which, yes, are usually not as great as we would like, but still. (p14)
    • They may have some impact by choice of supplier even when they don’t necessarily directly spend extra cash on e.g., higher worker payments.
    • They may have some impact by engaging with suppliers. They provide small-scale examples of conducting worker satisfaction surveys via independent third party which seemed to provide some concrete improvements (p30) and “supporting” another supplier in “implementing best practices for a worker-management safety committee” (p30).
    • They’re reducing exposure to hazardous chemicals in final assembly, and according to them they are “the first company to start eliminating CEPN’s second round priority chemicals” (p31). I don’t know much about this.
    • With partners, they “organize school competitions in which children are educated about […] e-waste” (p40).
    • They’re “building local recycling capacity” in Ghana by “collaborating” with recycling companies (p40).
    • Extremely high repairability (with modest costs for replacement parts that make it financially sensible to repair instead of replace) keeps more phones in use, reducing all the bad parts of having to manufacture brand new phones.
    • The ICs make up a huge portion of the environmental costs of the phone (both with the FP4 (pp 40-41) and with the FP5 (p10)), and Fairphone isn’t big enough to get behemoth chip manufacturers to change their processes (though apparently they’re lobbying Qualcomm for socketable designs, as unlikely as that is to happen any time soon). If you accept the premise that for around half of the phone they have almost no impact on in terms of the manufacturing side, it makes their efforts on the rest a bit better, I guess?

    So yes, they are a long way from selling “100% fair” phones, but it seems like they’re moving the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.





  • So they literally agree not using an LLM would increase your framerate.

    Well, yes, but the point is that at the time you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where you also wouldn’t need your frame rate maxed out), so that downside seems kind of moot.

    Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?

    If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.

    For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.