• 0 Posts
  • 9 Comments
Joined 11 months ago
Cake day: January 23rd, 2025


  • This would be a whole new pipeline to make interactivity work. Emulating a server with cached responses would let you reuse the JS part of websites and is easier to do. I have no doubt that some pages wouldn’t work, and there would be a shitton of security considerations I can’t even imagine.
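
    A minimal sketch of that “emulated server”, done as a cache-first Service Worker that replays recorded responses (the cache name is made up, and a real archive would need far more care than this):

```typescript
/// <reference lib="webworker" />
// sw.ts — a cache-first Service Worker that replays recorded
// responses, i.e. a tiny "emulated server" for the page's JS.
// "page-archive-v1" is a made-up cache name for this sketch.
const sw = self as unknown as ServiceWorkerGlobalScope;
const CACHE = 'page-archive-v1';

sw.addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(
    caches.open(CACHE).then(async (cache) => {
      const hit = await cache.match(event.request);
      if (hit) return hit; // replay: the page's JS never knows the server is gone

      // Cache miss: fall through to the network and record the
      // response so a later offline visit can replay it.
      const fresh = await fetch(event.request);
      if (event.request.method === 'GET' && fresh.ok) {
        await cache.put(event.request, fresh.clone());
      }
      return fresh;
    })
  );
});
```

    POSTs, auth tokens, and cross-origin requests are exactly where those security considerations start piling up.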


  • Defining and manipulating this “machine state” is exactly the hard part of the concept. I’m not saying it can’t be done, but it’s a beast of a problem.

    Our best current solutions are just dumb web crawler bots.

    To me a simple page-saving (ctrl+s) integration seems like the most realistic solution, something like the sketch below.
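
    For reference, that ctrl+s-style snapshot really is the trivial part. A minimal sketch (the filename is arbitrary):

```typescript
// snapshot.ts — serialize the *rendered* DOM (roughly what ctrl+s
// captures) and offer it as a download. Runs in the page context.
function savePageSnapshot(filename = 'snapshot.html'): void {
  // outerHTML reflects the DOM after scripts have run, not the
  // original server response.
  const html = '<!doctype html>\n' + document.documentElement.outerHTML;
  const blob = new Blob([html], { type: 'text/html' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}

savePageSnapshot();
```

    It captures one moment of one page; event handlers and anything fetched later are lost, which is the whole problem being discussed.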


  • Mr. Satan@lemm.ee to Technology@lemmy.world · *Permanently Deleted* · 2 points · edited 7 months ago

    Ok, so your average site doesn’t deliver its content directly. The initial load is just the framework required to fetch and render the content dynamically; a bare-bones example of such a shell is sketched below.
    Short of crawling the whole site, there is no real way to know what, when, or why a thing is loaded into memory.
    You can’t even be sure that some pages will stay the same after every single refresh.
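
    To make that concrete, here is roughly all that ctrl+s would capture on a typical single-page app (the /api/page/home endpoint is hypothetical):

```typescript
// shell.ts — the bootstrap script of a typical SPA. The saved HTML
// contains only an empty <div id="app"> plus this script; the real
// content exists only after the fetch below runs.
// "/api/page/home" is a made-up endpoint for this sketch.
async function bootstrap(): Promise<void> {
  const res = await fetch('/api/page/home');
  const page: { title: string; body: string } = await res.json();
  document.title = page.title;
  const app = document.getElementById('app');
  if (app) app.innerHTML = page.body;
}

bootstrap();
```

    Open the saved copy offline and that fetch fails, so an archive has to answer the request itself, which is where the “emulate the server” idea comes in.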

    Comparing it to saving the state of an OS isn’t fair, because there the state is in one place: on the machine running the code. The difference here is that the state of the website is not under the browser’s control, and there’s no standard way to access it that would allow what you’re describing.

    Now, again, saving rendered HTML is trivial, but saving the whole state of a dynamic website requires a full-on web crawler, and then not only loading the saved pages and scripts, but also emulating the servers they fetch their data from. Both halves of that are sketched below.
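
    A minimal sketch of that record/replay pair using Playwright. The target URL and the in-memory store are assumptions; a real archiver would persist to disk and deal with POST bodies, auth, and streaming responses:

```typescript
// record-replay.ts — crawl while recording every response, then
// answer requests from the recording: the "emulated server".
import { chromium } from 'playwright';

type Recorded = { status: number; headers: Record<string, string>; body: Buffer };

async function main(): Promise<void> {
  const store = new Map<string, Recorded>();
  const browser = await chromium.launch();

  // Record: browse (or crawl) while capturing every response.
  const recorder = await browser.newPage();
  recorder.on('response', async (res) => {
    try {
      store.set(res.url(), {
        status: res.status(),
        headers: res.headers(),
        body: await res.body(),
      });
    } catch {
      // Redirects and aborted requests have no body; skip them.
    }
  });
  await recorder.goto('https://example.com', { waitUntil: 'networkidle' });
  await recorder.close();

  // Replay: fulfill every request from the store instead of the network.
  const replayer = await browser.newPage();
  await replayer.route('**/*', async (route) => {
    const hit = store.get(route.request().url());
    if (hit) {
      await route.fulfill({ status: hit.status, headers: hit.headers, body: hit.body });
    } else {
      await route.abort(); // anything the crawl missed simply fails
    }
  });
  await replayer.goto('https://example.com');
  await browser.close();
}

main().catch(console.error);
```

    The `route.abort()` branch is the honest part of the sketch: whatever the crawler never triggered simply isn’t in the archive.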