A great user experience is one that manages to blend the two in a way that does not compromise on robust, solid foundations of security on one hand, and fast, responsive software interaction on the other. Snaps are self-contained applications with layered security, and as a result they may sometimes show reduced perceived performance compared to the same applications offered via traditional Linux packaging mechanisms. We are well aware of this phenomenon, and we have invested significant effort and time in resolving any speed gaps while keeping security in mind. Last year, we talked about improved snap startup times following the fontconfig cache optimization. Now, we want to tell you about another major milestone: the use of a new compression algorithm for snaps offers a 2-3x improvement in application startup times!

LZO and XZ algorithms

By default, snaps are packaged as a compressed, read-only squashfs filesystem using the XZ algorithm. This results in a high level of compression, but consequently requires more processing power to uncompress and expand the filesystem for use. On the desktop, users may perceive this as "slowness": the time it takes for the application to launch. The effect is far more noticeable on first launch, before the application data is cached in memory; subsequent launches are fast, and typically there's little to no difference compared to traditionally packaged applications.

To improve startup times, we decided to test a different algorithm, LZO, which offers less compression but needs less processing power to do its work. The LZO algorithm was selected because it offers the highest level of compatibility across a number of different use cases. As a test case, we chose the Chromium browser (stable build, 85.X). We believe this is a highly representative case, for several reasons. One, the browser is a ubiquitous and popular application with frequent usage, so any potential slowness is likely to be noticeable. Two, Chromium is a relatively large and complex application. Three, it is not part of any specific Linux desktop environment, which makes the testing independent and accurate. For comparison, the XZ-compressed snap weighs ~150 MB, whereas the LZO-compressed one is ~250 MB in size.

We decided to conduct the testing on a range of systems (2015-2020 laptop models), including HDD, SSD and NVMe storage, Intel and Nvidia graphics, and several operating systems: Kubuntu 18.04, Ubuntu 20.04 LTS, Ubuntu 20.10 (pre-release at the time of writing), and Fedora 32 Workstation (just before the Fedora 33 release). We believe this offers a good mix of hardware and software, allowing us a broader understanding of our work.

System 1: 4-core/8-thread Intel(R) i5(TM) processor, 16GB RAM, 500GB SSD, and Intel(R) UHD 620 graphics, running Kubuntu 18.04.
System 2: 4-core Intel(R) i3(TM) processor, 4GB RAM, 1TB 5,400rpm mechanical hard disk, and Intel(R) HD 440 graphics, running Ubuntu 20.04 LTS.

My goal is to be able to reduce the time needed to look at specific sections from the middle of very large log files compressed to .xz. The files are, for example, 6GB compressed and 60GB uncompressed, so with simple commands like xzcat | tail -1 to look at just the last line of the uncompressed file, you'd have to wait many minutes for the entire file to be decompressed. From reading, my understanding is that .xz files are organised into blocks, and that it is possible to decompress specific blocks if you can find the right starting position and length of the file to take.

You can get the list of block offsets with xz --verbose --list FILE.xz. If you want the last block, you need its compressed size (column 5) plus 36 bytes for overhead (found by comparing the size to hd |grep 7zXZ). Fetch that block using tail -c and pipe it through xz. Since the question wants the last line of the file, I then pipe that through tail -n1: SIZE=$(xz --verbose --list .xz |awk 'END ')

I've been reading but I could not follow a lot of it, specifically the part about the overhead of 36 and how he got it ("plus 36 bytes for overhead (found by comparing the size to hd |grep 7zXZ)").

36 definitely did NOT work with my file. I did not find out where the 36 came from. I actually tried 1 to 100 and none worked.
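The block-listing step of the answer can be sketched end to end. Everything below is illustrative: the log file is generated on the spot, xz-utils is assumed to be installed, and the machine-readable `--robot` output is parsed instead of the answer's screen-scraped column 5, since its tab-separated `block` lines are meant for scripts.

```shell
# Generate a small multi-block .xz file to experiment on (a stand-in
# for the 6GB log); --block-size starts a new block every 1 MiB of
# input, which is what makes per-block access possible at all.
head -c 4194304 /dev/urandom > big.log
xz -f --block-size=1MiB big.log            # produces big.log.xz

# One tab-separated "block" line per block, with offsets and sizes.
xz --robot --verbose --list big.log.xz | grep '^block'

# Total (compressed) size of the last block -- field 7 of the robot
# "block" lines in current xz versions. The answer then adds 36 bytes
# of end-of-stream overhead to this before using tail -c.
SIZE=$(xz --robot --verbose --list big.log.xz | awk '$1 == "block" { s = $7 } END { print s }')
echo "last block: $SIZE bytes compressed"
```

The answer then fetches the last block with `tail -c` (adding the 36 bytes of overhead) and pipes it through `xz` and `tail -n1`. As the comments report, though, the fixed 36-byte figure was found empirically and does not hold for every file: the stream index at the end of an .xz file holds one record per block, so the overhead grows with the number of blocks.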
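The size-versus-CPU trade-off behind the snap post above can be felt with tools that ship on any Linux system. Since LZO itself is rarely installed by default, this sketch uses gzip -1 as a stand-in for a lighter, faster codec, with xz -6 playing the role of the default snap compression; the sample file and sizes are illustrative, not the Chromium measurements.

```shell
# Illustrative stand-in for a snap's contents; real snaps are
# squashfs images, so these are not the post's Chromium figures.
head -c 8388608 /dev/zero | tr '\0' 'a' > sample.dat

# Heavier codec (what snaps use by default) vs. a lighter one;
# gzip -1 stands in here for LZO.
xz   -6 -c sample.dat > sample.dat.xz
gzip -1 -c sample.dat > sample.dat.gz

# The heavier codec yields the smaller file; the cost is CPU time
# at decompression, i.e. at application first launch.
wc -c sample.dat.xz sample.dat.gz
```

To compare an actual snap both ways, squashfs-tools can repack it, e.g. `unsquashfs chromium.snap` followed by `mksquashfs squashfs-root repacked.snap -comp lzo` (the snap name is illustrative); this is the kind of size comparison behind the post's ~150 MB XZ versus ~250 MB LZO figures.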