Why do we install applications?

Yo, that’s the question I’ve been thinking about. I don’t know all the technicalities, so I wanted to write something about it.

Windows has portable exe files. Mac has dmg files which are mounted once and the application is dragged to the Applications folder. Linux has AppImage which is similar to these.

I believe both exe and AppImage could act like a Mac equivalent. Depending on how the application is configured it could set up a user profile when launched for the first time. I don’t think that counts as installation.

I’m not talking about applications such as WindowBlinds or Groupy which require deep system integration. I’m talking about ordinary applications.

I have been using LibreOffice with extensions and dictionaries as AppImage on Linux and that experience was the same as using an installed version, but without the system integration.

I believe Apple figured this out long ago and you get complete system integration by just dragging a file to the Applications folder.

It’s almost like I want to get a Mac just to use this “magic” myself.

Both Windows and Linux are good at spreading files all over the system and I’m thinking why not do it like macOS?

One file (archive) = one application. No need to pollute the system with files everywhere.
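To make that concrete, here is a toy stand-in for the model (a tiny shell script posing as an AppImage; the filename is made up). The whole life cycle fits in four commands:

```shell
# Toy stand-in for a self-contained app: one executable file is the
# whole application. No installer, no files scattered anywhere else.
printf '#!/bin/sh\necho "hello from a self-contained app"\n' > Demo.AppImage

chmod +x Demo.AppImage   # "install": mark the one file executable
./Demo.AppImage          # run it in place
rm Demo.AppImage         # "uninstall": delete the one file
```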

Comments (70)

  1. jimchamplin

    Most Mac apps don’t do a lot aside from register a service, which can call the application binary, or a secondary binary inside the .app bundle.


    I think even draggable .app bundles have a way to script the installation of a helper.

  2. skane2600

    Well, it depends on the application. I've seen Mac applications that require more than just dragging to the Applications folder, and I've seen Windows applications that can be dragged wherever desired and run immediately. Obviously, applications either have to be copied from external storage or downloaded from the Internet. Does a Mac application have to be dragged to the Applications folder to work, or is it just a convention? I don't know.

  3. lambert369

    Hello,

    I am using Ubuntu since my Windows install was so slow. I installed Ubuntu on my PC and started using it. But I'm in trouble because I can't use applications here like MS Office and other stuff I was using on Windows. What is the solution?

  5. Lauren Glenn

    I went on vacation and had bad wifi at the hotel. If I could connect, I had to use my phone as a Hotspot for better speed. Why waste that bandwidth downloading programs when I can have them on my large hard drive? It's like when people ask me why I use my Zune 64 or my iPod Classic. Tell you what.... Drive a far distance and go into an area with bad coverage. Your music will cut out often.... But on an offline device, that never happened.

    • longhorn

      In reply to alissa914:

      I think you misunderstood. The question is why spread files all over the system like Windows installers and Linux debs/rpms when you can have one single self-contained binary like Mac (the .app within the .dmg file).


      It has nothing to do with online vs offline. It's about having a clean base system that can't be corrupted by applications.


  6. adam.mt

    In reply to F4IL:

    If you google it, you'll find loads of people have done the experiment! The size of the Windows registry makes little difference. Famously, Mark Russinovich (MS senior tech, former SysInternals) confirmed this.


    If you like "snake oil" or are super OCD then clean the registry, otherwise leave well alone!


    Just one link:

    https://www.alienvault.com/blogs/security-essentials/should-windows-users-beware-of-registry-fixers

  7. Lauren Glenn

    Shared memory, I guess. A DLL can be put in memory and called by multiple applications. With the advent of page files, I guess it's less necessary.... but it is useful when designing things and having a defined library that you don't compile as much. But as far as polluting the system with files everywhere, you generally can put all your necessary DLLs in the application's folder (at least in Windows). All my .NET applications do that except for the .NET framework.

  8. jimchamplin

    I'm seeing a lot of confusion here. I'd like to help clear it up.


    First off, .dmg is a disk image file. It mounts and contains what is required to either run or install an application. Applications come in one of two kinds of bundles.


    The most common is the .app bundle. This is a directory, say, "Text Edit.app" that the Finder presents to the user visually as one object. Right-click an application and select "Show package contents" and wow, look. Just a folder tree. It's not even compressed. You can navigate through them from the command line.
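    To make that concrete, here is a sketch that recreates the typical bundle layout by hand (names are illustrative; a real bundle is produced by the build tools, not mkdir):

```shell
# A .app bundle is just a directory tree. Recreating the usual layout
# locally shows there's nothing opaque about it (paths illustrative).
mkdir -p "TextEdit.app/Contents/MacOS" "TextEdit.app/Contents/Resources"

# Contents/Info.plist holds the metadata: bundle ID, icon, document types.
touch "TextEdit.app/Contents/Info.plist"
# Contents/MacOS/ holds the actual executable(s).
touch "TextEdit.app/Contents/MacOS/TextEdit"

find TextEdit.app   # plain folders and files; no compression, no magic
```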


    There are also installer bundles. These load using the macOS Installer. Typically this is used when the application needs to install system extensions. Perhaps for device drivers in the form of .kext (kernel extension) bundles. That's mainly what I see them for.


    An application that comes in the form of a .app bundle will perform any additional setup it requires upon first run. That's why you'll occasionally see it request admin access the first time you run it. It can install a helper, that is, a small utility that performs a secondary task like a menubar widget. If the application surfaces a system service (Application menu > Services), that happens automatically, again on first run.


    The process is far simpler than everyone is making it out to be. Remember, this is from 1987. On machines that used a slow magneto-optical drive. That's why it's so quick now as to be seamless.

  9. SWCetacean

    I've used macOS for a bit since a professor that I worked for had an all-Mac lab, so I have some experience with the whole drag-the-.dmg-into-the-Applications-folder process. As far as I can tell, macOS monitors the Applications folder, and when a new .dmg file is added to that folder, the OS in the background will read the metadata, add the app to its list of apps, and execute any scripts the app needs to install itself in the system. So it's not that Mac .dmg files don't get installed; they do get installed, and that's how they receive their system integration. The difference is that the Mac handles the installation behind the scenes, and .dmg apps are built in such a way that they don't require user input to install.


    You get the benefits of ease of use, but you lose the ability for user-customized settings. I don't think there's a way to do the Mac drag-and-drop install on any location that's not the Applications folder (maybe to the per-user ~/Applications folder?). You can't just put the .dmg file anywhere and expect MacOS to think it's installed. On my Windows 10 desktop, I have 3 different disks, and I install different programs to each disk. For example, my system utilities and tools get installed to my HDD, while Windows and my high-priority games are installed to the fastest SSD, and other apps are installed to my capacity SSD. That would not be possible without some gnarly symlinks if I only had a single Applications folder where programs are installed.


    Another reason to split files across multiple locations is per-user settings. I don't know how macOS handles per-user settings for a single program, but different users might not have the same filesystem permissions, so user configurations are stored in a different location that is guaranteed to be accessible by the specific user.

    • jimchamplin

      In reply to SWCetacean:

      .dmg files are disk images. .app bundles are applications. Where did this confusion come from!?


      It can indeed run from anywhere and macOS doesn’t care. The bundle can contain a script that will install a helper application if needed, but the system can integrate any provided services without any “installation”. “Services” means something special in macOS, that’s not a general term, and there’s an API for surfacing services to the system.


      User settings are in ~/Library/Preferences in the form of .plist files. It doesn’t matter what volume they’re on. UNIX permissions don’t matter since macOS manages permissions for system and user files automatically. macOS really doesn’t care much about what the *nix bits are doing. They’re just there to provide a userland.
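      A quick sketch of what those .plist preference files hold, using Python's standard plistlib, which reads and writes the same property-list format (the keys and values below are made up for illustration):

```python
# Sketch: macOS stores per-user settings as .plist files under
# ~/Library/Preferences. plistlib speaks the same format; the
# preference keys below are illustrative, not from a real app.
import io
import plistlib

prefs = {"WindowWidth": 1024, "OpenRecent": ["notes.txt", "draft.txt"]}

buf = io.BytesIO()
plistlib.dump(prefs, buf)       # serialize to XML property-list bytes
buf.seek(0)
restored = plistlib.load(buf)   # parse it back into a dict

assert restored == prefs
```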

    • hrlngrv

      In reply to SWCetacean:

      . . . I don't know how macOS handles per-user settings for a single program . . .

      Perhaps macOS doesn't reinvent the Unix wheel and uses dot-files in users' home directories to store configuration for particular programs. I'm pretty sure macOS's Terminal app processes my ~/.bashrc.

  10. Martin Pelletier

    I guess what you are talking about is like how the Windows Store works. It also depends on the dev doing the installation package. You can have a self-contained exe that installs the software. But like you wrote, some installations put files in many places that get orphaned when you uninstall the software.

    • longhorn

      In reply to MartinusV2:

      I mostly wanted to highlight that when you move a Mac application from the Applications folder to the trash can then it gets "uninstalled". Pretty simple and straightforward concept if you ask me. Nothing installed = nothing to uninstall.


      Application stores are big on mobile, but haven't taken off on desktop OSes yet. I doubt they will unless other methods are taken away. There is too much money to be lost (30 %) and there are also restrictions that don't apply if you distribute directly to your users.


      I don't know how AppX packages work, but it's still something that you "install" even if you do it outside the Store. It would be cool if you could uninstall an AppX application just by deleting a file, but I don't think that is possible.


      • wright_is

        In reply to longhorn:

        Except it isn't a file, it is a folder, and dragging it to the trash is equivalent to clicking uninstall on Windows or running apt-get remove, yum, etc. on Linux. I remember some applications on the Mac had install and de-install routines; it wasn't just copying stuff over. Scripts ran and made other modifications, which had to be undone when the application was removed. MS Office was, ironically, one such application.

        If you want encapsulation of executable images, then look at Docker and Kubernetes.

        What you don't get with portable applications, whether they be on Windows, Linux or Mac is any integration. Not a problem for simple applications, a big problem for something that fits into a workflow or automation.

        For example, a portable application can't register itself as a default, so you can't double-click on a data file and have it open in the application; you have to open the application and then open the data file. Not a big problem for experienced users, but still convoluted.
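        As a purely illustrative sketch, this is roughly the kind of registry data an installer writes to create a file association, which is exactly what a portable exe never does (the extension, ProgID, and paths here are made up):

```reg
Windows Registry Editor Version 5.00

; Illustrative only: associate the made-up ".foo" extension with "MyApp"
; so double-clicking a .foo file launches the application.
[HKEY_CLASSES_ROOT\.foo]
@="MyApp.Document"

[HKEY_CLASSES_ROOT\MyApp.Document\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""
```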

        Similarly, it can't register itself and its components, so you can't use them for automation purposes in other applications, embedding objects or calling registered APIs to perform certain functions. Again, for a simple viewer application or Paint replacement, not a problem, but for Adobe Suite, MS Office / LibreOffice, a big problem.

        It also works the other way around, if you have a portable application, adding add-ins is more complex, because the program isn't registered and, if the portable application is on a read-only partition, you need to re-register the add-ins every time you start the application.

        Then there is product registration and licensing. Do you really want to have to enter the serial number and wait for the application to register itself online every time you start it? That's obviously less of a problem with open source software.

        To take your argument one step further, why bother with portable applications? Why not just use web apps? Zero footprint.

        • longhorn

          In reply to wright_is:

          "Except it isn't a file, it is a folder and dragging it to the trash is equivalent to clicking uninstall on Windows or apt-get -remove or yum etc. on Linux."


          No, it's very different from your average software package. It's a self-contained format. It doesn't extract itself to the file system. It may set up its own profile though so you can customize it to your heart's content. And you do have write permission so no need to worry about all those things you brought up.


          I never said: "We should never install applications". I just wanted a thought provoking title "Why do we install applications?". It seems that the title was more provoking than I intended.


          • hrlngrv

            In reply to longhorn:

            . . . And you do have write permission . . .

            Do you really mean that? To the contents of .dmg files when mounted when the application is running? So user processes could alter anything in the .dmg file? Would you call that secure?



            • longhorn

              In reply to hrlngrv:

              You do have write permission to the user profile which some AppImages choose to create. If no user profile is created then the AppImage will use an existing user profile, which means that a Firefox AppImage will write to your standard Firefox user profile. So all your extensions and browsing history and bookmarks will be used by the Firefox AppImage in that case. It's much better if an AppImage sets up its own user profile. That way you can have LibreOffice 4, 5 and 6 running simultaneously and apply different customizations to each of them.
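              A rough sketch of the one-profile-per-version idea (directory names are made up; the real LibreOffice accepts a -env:UserInstallation switch for pointing at a profile directory, so each version can keep its own settings):

```shell
# Illustrative only: give each application version its own profile
# directory so several versions can run side by side.
mkdir -p profiles/lo5 profiles/lo6

# The commented lines show the shape of the real invocation:
# ./LibreOffice-5.AppImage -env:UserInstallation=file://$PWD/profiles/lo5
# ./LibreOffice-6.AppImage -env:UserInstallation=file://$PWD/profiles/lo6

ls -d profiles/lo5 profiles/lo6   # two independent profiles, one per version
```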


              As for macOS: The dmg container is only mounted once to access the .app and drag it to Applications (or another place). I assume macOS applications have user profiles just as Linux so obviously you only have write permission to application user profiles which are owned by you (the user that is logged in).


              There is really no difference in write permission just because the application is self-contained (not installed/extracted) because write permission is always to the application's user profile and not to the application itself.


        • skane2600

          In reply to wright_is:

          Of course web apps have the same integration issues as portable apps, but at least the latter can be run without an Internet connection. Admittedly the lack of an Internet connection isn't that big of a problem these days, but "footprint" is even less of a problem.

  11. hrlngrv

    I figure one reason is interprocess scripting/automation. Under Windows and Linux, one application can run and control another application. I believe standard software installation including putting a lot under HKCR in the Windows registry and equivalent CORBA configuration for Linux would be necessary to support this.

    OTOH, if you're just installing XYZ Bitmap Editor as an alternative to Paint, portable .EXEs make a lot more sense to me.

    As for spreading files out everywhere, using standard tools, does Windows put installed software files anywhere other than in an application's own directory under C:\Program Files, maybe some system integration files under C:\Windows, and maybe some system/all users configuration under C:\ProgramData. If Windows could be split between read-only and read-write partitions, this would make more sense. As for Linux, the FHS specifies where files should go: either system/all users configuration goes under /etc, and all other files go under /usr/bin, /usr/lib and /usr/share or /usr/local counterparts, or everything goes under the application's own directory under /opt. Essentially, Linux software under /opt is equivalent to Windows software under C:\Program Files. /opt was meant for commercial software with restrictive licensing. The point of this being that /opt and /usr could be on the network rather than on every local computer, and given the magic of mounting under Linux, /usr could be remote while /usr/local was, in fact, on local computers.

    As for Apple, I don't use Macs, so I have limited first-hand knowledge about how they work. That said, while installing a software package from a user's perspective may be as simple as dragging a .dmg file into the Applications folder, macOS is an object-oriented system, so that may actually trigger a script in the .dmg file which unpacks the .dmg file and installs possibly thousands of files all over the Mac's file system. I kinda doubt that if you install Chrome from a .dmg file on a Mac that the result is a single immense executable file in the Applications folder.

    • longhorn

      In reply to hrlngrv:

      "As for spreading files out everywhere, using standard tools, does Windows put installed software files anywhere other than in an application's own directory under C:\Program Files, maybe some system integration files under C:\Windows, and maybe some system/all users configuration under C:\ProgramData. If Windows could be split between read-only and read-write partitions, this would make more sense. As for Linux, the FHS specifies where files should go: either system/all users configuration goes under /etc, and all other files go under /usr/bin, /usr/lib and /usr/share or /usr/local counterparts, or everything goes under the application's own directory under /opt. Essentially, Linux software under /opt is equivalent to Windows software under C:\Program Files. /opt was meant for commercial software with restrictive licensing."


      Good summary! As you see yourself Linux is the worst when it comes to keeping application files together.

      • F4IL

        In reply to longhorn:

        The fact that linux and unix-like systems traditionally do not dump files in one place is actually a benefit that stems from thoughtful engineering.


        This avoids one of the biggest performance and maintenance issues on Windows: the registry. In fact, the registry is so complex and fragile that most installer scripts only create entries and never delete them, out of fear of leaving the system in an unusable state.

        • skane2600

          In reply to F4IL:

          The registry allows integration capabilities that AFAIK aren't supported by Linux in a standard way. I seriously doubt that most installers never delete registry entries. Deleting entries is no more of a danger to the state of the system than creating them.

          • F4IL

            In reply to skane2600:

            Integration capabilities are supported in a standard way for application interop (dbus) and service management (systemd).


            Unfortunately installers do whatever they feel like. In many cases they delete entries, while in others they don't. Unlike on Linux, this leaves garbage that progressively slows down the system. In fact, simply installing an application like Visual Studio creates a zillion registry entries which result in a permanent system slowdown. Like any problem, of course, the registry gave birth to another: registry cleaners (CCleaner, Registry Mechanic, etc.) and various system optimizers.

            • skane2600

              In reply to F4IL:

              The advantage of the registry is that it provides minimal coupling between programs. The code reading or modifying another program's entries doesn't need to know any entry points or other internal details.


              The idea that registry left-overs are the cause of performance degradation is more a matter of conjecture than proof and registry cleaner makers take advantage of some people's belief in this unproven theory.


              It really shouldn't be that hard to test the theory. Take a new installation of Windows, measure performance and then write a lot of data to the registry and test again. That would isolate the effect of registry entries from other factors such as more background processes being added as more applications are installed.

              • F4IL

                In reply to skane2600:

                The same goes for dbus, applications don't need to know anything about internal details, that's the point.


                As far as the registry is concerned, it is not people's belief in an unproven theory. People reinstall and reset windows because their systems slowly but surely degrade. This has been a chronic issue with windows and is the real world effect of bad design.


                Performance degradation and bugs are not exclusively inherent to registry left-overs. For example: Installing an application (e.g. lightroom) adds hundreds of entries in an already bloated structure. If you decide you want to run notepad (or anything), windows will have to search through the registry to find the settings for notepad. The problem is that the registry is now bigger (because you installed lightroom) and as a result, more time consuming to search through. This translates to a never-ending performance sapping loop that can only be resolved by reinstalling - resetting windows.

                • skane2600

                  In reply to F4IL:

                  What matters is the scale of the added time. Does a change of registry size result in a perceptible change in performance? Reinstalling Windows wouldn't prove the registry was the cause of the slow-down since the other factors would start at ground-zero too.

                • F4IL

                  In reply to skane2600: What matters is the scale of the added time. Does a change of registry size result in a perceptible change in performance?


                  Since it is cumulative, it eventually does. With a given search algorithm, on a given system, searching through a structure with x entries is faster than searching through a structure with 2x entries.
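                  The textbook claim can be sketched in a few lines (a toy flat key list, not the registry's actual structure, which is a tree of hives, so this only illustrates the scaling argument being cited):

```python
# Toy model of the claim: a linear scan over a flat list of keys takes
# work proportional to the number of entries. (The real registry is a
# tree, so this is only the textbook version of the argument.)
def linear_search_steps(keys, target):
    steps = 0
    for key in keys:
        steps += 1
        if key == target:
            return steps
    return steps  # scanned everything without finding the target

small = [f"key{i}" for i in range(1000)]
large = [f"key{i}" for i in range(2000)]   # 2x entries

# A miss must scan every entry, so 2x entries means 2x the work.
assert linear_search_steps(large, "missing") == 2 * linear_search_steps(small, "missing")
```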


                  In reply to skane2600: Reinstalling Windows wouldn't prove the registry was the cause of the slow-down since the other factors would start at ground-zero too.


                  It wouldn't disprove it either. This means the registry is still suspect for the inevitable performance degradation that will lead to a system re-installation.


                  This is one of the reasons why thoughtful design and careful engineering would have excluded solutions like the registry in favor of the proven unix based configuration files.

                • skane2600

                  In reply to F4IL:

                  We don't know what "eventually" amounts to. Given that registry entries generally increase with added applications, disk space might be exhausted before significant performance is hampered by the registry. Not saying that's the case, but it might be.


                  I see you prefer the UNIX approach but your opinion isn't proof that the design used in UNIX represents more "thoughtful design and careful engineering" than that of Windows. This is a debate that has been going on for decades and there will never be a consensus on a conclusion. Both OS's have issues that relate to the era they were born in.

                • F4IL

                  In reply to skane2600:

                  My opinion only echoes the known problem that increasing the registry size results in a slower system. On a given system, searching through a small number of entries is faster than searching through a higher number of entries. This is outlined in university courses on algorithms and data structures. This is fact, not my opinion.


                  The consensus is reached every time someone blows away and re-installs windows because he is experiencing the performance degradation. This has been going on for decades and doesn't seem to be going away any time soon, even though we have been iterating through increasingly more powerful hardware.

                • skane2600

                  In reply to F4IL:

                  You're trying to conflate the fact that searching takes longer with more entries with the conclusion that the number of entries in the registry is the reason that Windows slows down significantly enough to be noticeable. Only the former is "outlined in university courses", not the latter.


                  "The consensus is reached every time someone blows away and re-installs windows because he is experiencing the performance degradation. "


                  So you're just going to ignore other factors? Are universities now teaching that increasing the number of background processes doesn't consume more CPU time?


                  Perhaps some day someone will perform the experiment I proposed and we can replace our speculations with actual facts, but I guess I'm done arguing about this.

                • F4IL

                  In reply to skane2600: Perhaps some day someone will perform the experiment I proposed and we can replace our speculations with actual facts, but I guess I'm done arguing about this.


                  Well, speaking for myself, my position is not speculative, but eventually someone might care enough to factually disprove the extensive criticisms surrounding the registry.

                  It will no doubt be an interesting read.

          • hrlngrv

            In reply to skane2600:

            The registry allows integration capabilities that AFAIK aren't supported by Linux in a standard way. . . .

            Depends how you define standard way. I'd argue CORBA is an analogous standard.

            As for cleaning the registry on uninstall, I've seen more than a few instances of uninstalled software still lying around as "Open with" subkeys. Perhaps that's because those were user-specific, so under HKCU, and the uninstaller only cleans HKLM.


      • hrlngrv

        In reply to longhorn:

        . . . Linux is the worst when it comes to keeping application files together.

        Not exactly. In many cases binary executables are stored under /usr/lib or /usr/local/lib and symlinks are stored under /usr/bin or /usr/local/bin. This means the Linux PATH environment variable tends to have fewer directories in it than most Windows users' PATH variables. FWIW, most applications' supporting but not directly (or even) executable files are stored under application-specific subdirectories under /usr/lib or /usr/local/lib.

        Anyway, the main point of the FHS is to make it easier to put common software (/opt and /usr) on network servers available to many users and only put machine-specific software (/usr/local) on individual machines along with the software in /bin, /lib and /sbin, which are necessary either to boot a machine to run level 3 or diagnose why a machine couldn't make it to run level 3. Again, most application-specific files are stored under application-specific subdirectories under /usr/lib or /usr/local/lib with only their top-level executables or symlinks to top-level executables stored in /usr/bin or /usr/local/bin.

        Linux is optimizing for different things than Windows. I'd hope the security advantages of having /opt, /usr, and /usr/local mounted read-only, something Windows today can't do with C:\Program Files, would be obvious.

        Besides, if one's using a package manager, what does it matter where each application stores its files if standard users couldn't store data files in those locations?

    • longhorn

      In reply to hrlngrv:

      "I kinda doubt that if you install Chrome from a .dmg file on a Mac that the result is a single immense executable file in the Applications folder."


      I do believe Mac applications consist of a single file (archive). Not all Mac applications, but those that are installed by dragging them into the Applications folder. The helper scripts are just there to create shortcuts for launchers/Launchpad and create file associations I believe.


      I don't own a Mac either so my understanding of how the Mac handles application install/uninstall is mostly based on the blog post below. It was written by the guy who created the AppImage format based on the Mac .app format that can be found inside the .dmg file. The .dmg file itself is pretty useless I believe, a container.


      It's an interesting read for anyone who prefers technical simplicity:

      (link) Sorry, it seems impossible to post links.


      There is also an excellent quote by Steve Jobs:

      "Most people make the mistake of thinking design is what it looks like. People think it’s this veneer — that the designers are handed this box and told, “Make it look good!” That’s not what we think design is. It’s not just what it looks like and feels like. Design is how it works."


      I wish Apple would apply this thinking to Mac hardware. :)


      • skane2600

        In reply to longhorn:

        Kind of a strange thing for Jobs to say given how Apple's priority has almost always been form over functionality.

      • jimchamplin

        In reply to longhorn:

        The .app bundle is an uncompressed folder that the Finder shows to the user as a single object. Nothing special.

        • Paul Thurrott

          In reply to jimchamplin:

          Well. It is special, when you think about it.


          We have app containers in Windows too. But the closest we've come to the drag-and-drop install/uninstall model on the Mac is Store apps and .appx/whatever they're about to change to bundles. The notion that an entire app and its supporting files can be contained in a single ZIP-like file, and that the system can then shield this complexity from the user, is smart.

          • jimchamplin

            In reply to paul-thurrott:

            Well, yeah, it is special in that regard. As far as I know, only the classic Mac OS, Amiga Workbench, and Be OS ever worked as seamlessly.

          • hrlngrv

            In reply to paul-thurrott:

            . . . We have app containers in Windows too. . . .

            If you're referring to Mac .dmg files, I've read up a bit on them. They're disk image files which are mounted as file systems. This does contain everything, but is there an effective difference, from a Unix-like perspective, between ad hoc file systems in the Applications folder and the same files in the same hierarchy in a specific subdirectory under /opt on a Linux system?

            From a broader Unix-like perspective, does this mean if, say, an application uses lua as its own scripting language, then it uses the lua subsystem included in its .dmg file rather than a systemwide lua subsystem available to all software installed the traditional way? If so, you're trading containment for redundancy, and that's not always smart.

            • longhorn

              In reply to hrlngrv:

              "From a broader Unix-like perspective, does this mean if, say, an application uses lua as its own scripting language, then it uses the lua subsystem included in its .dmg file rather than a systemwide lua subsystem available to all software installed the traditional way? If so, you're trading containment for redundancy, and that's not always smart."


              With AppImages you can bundle anything you want. You don't bundle stuff that is already present on the target system. If you target a wide range of distros, let's say all mainstream Linux distros from 2010 and newer, then you have to bundle some things which are not present or new enough in older distros. The official LibreOffice AppImages do that, and the size is still small. This is great, I thought, and then I learned about the Mac origin, and that macOS even offers system integration despite the fact that it is a self-contained application which isn't installed.


              • hrlngrv

                In reply to longhorn:

                I chose lua intentionally because it's not standard on Macs, but it could be installed and thus be available systemwide.

                "Size is still small" is subjective.

                The Linux approach minimizes redundancy. The Mac .dmg approach would seem to ignore redundancy.

            • jimchamplin

              In reply to hrlngrv:

              .dmg files don't go in the Applications folder. The disk image simply contains the application or its installer bundle.


              A .app bundle is nothing more than a folder tree that the Finder obfuscates visually. Gonna make a large post about this.
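              To make that concrete, here is a minimal sketch in Python of what a .app bundle boils down to. It builds a hypothetical bundle layout (`Example.app` and all its keys are made up for illustration) in a temp directory and reads its Info.plist with the standard `plistlib` module; there is nothing inside but ordinary directories and files.

```python
import pathlib
import plistlib
import tempfile

# Build a minimal, hypothetical .app layout in a temp dir to show that a
# bundle is just a directory tree with a well-known structure.
root = pathlib.Path(tempfile.mkdtemp())
bundle = root / "Example.app" / "Contents"
(bundle / "MacOS").mkdir(parents=True)
(bundle / "Resources").mkdir()

# Info.plist is an ordinary property-list file describing the bundle.
info = {
    "CFBundleIdentifier": "com.example.Example",  # hypothetical values
    "CFBundleExecutable": "Example",
    "CFBundleName": "Example",
}
with open(bundle / "Info.plist", "wb") as f:
    plistlib.dump(info, f)

# "Opening" the bundle is just reading files inside the directory.
with open(bundle / "Info.plist", "rb") as f:
    meta = plistlib.load(f)
print(meta["CFBundleExecutable"])  # -> Example
```

              The Finder shows this tree as a single double-clickable icon, but any file manager or shell can walk right into it.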

              • hrlngrv

                In reply to jimchamplin:

                Thanks. I know little about Macs and macOS other than that it has a Mach kernel and some Unix-like features. I know how things work under Windows and Linux, and portable Windows software isn't like either .dmg or .app files for Macs. They sound more like snaps for Linux.

                • jimchamplin

                  In reply to hrlngrv:

                  The *nix bits aren’t part of what makes macOS what it is. It’s a big point of confusion. Apple’s *nix is Darwin, which macOS, iOS, watchOS, and tvOS run on. Each is a specialized derivation of NeXT technologies which Apple bought in 1997.


                  And that’s a good way to think. Snap packages are similar in quite a few ways, actually.

                • hrlngrv

                  In reply to jimchamplin:

                  By "some Unix-like features" I meant Terminal and the POSIX command-line tools. I realize macOS is a lot more than that and the kernel.

        • longhorn

          In reply to jimchamplin:

          It is special in the same way a portable exe file is. It's one archive that holds the entire application. You can run it as is, no installation/extraction required.


          What is special about the Mac is that not only is the application bundled in one file, but also all the system integration: Launchpad integration and file associations.


          My understanding is that when the .app is dragged into the Applications folder macOS reads the files inside and adds launcher and file association info to a database. No files are moved. That's the impressive part: no clutter.
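          On macOS that registration data lives in the bundle's Info.plist (keys like CFBundleDocumentTypes, which Launch Services records in its database). Here is a hedged sketch, with entirely made-up names and extensions, of what a registration pass has to do: just parse the plist to learn the file association, with no files copied anywhere.

```python
import plistlib

# Hypothetical Info.plist fragment declaring a file association. On macOS,
# Launch Services reads keys like CFBundleDocumentTypes from the bundle's
# Info.plist and records them in its database; no files are copied.
plist_bytes = plistlib.dumps({
    "CFBundleName": "Example",
    "CFBundleDocumentTypes": [
        {
            "CFBundleTypeName": "Example Document",
            "CFBundleTypeExtensions": ["exdoc"],  # hypothetical extension
            "CFBundleTypeRole": "Editor",
        }
    ],
})

# A registration pass only has to parse the plist to learn the association.
meta = plistlib.loads(plist_bytes)
for doc_type in meta["CFBundleDocumentTypes"]:
    print(doc_type["CFBundleTypeExtensions"])  # -> ['exdoc']
```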


          • hrlngrv

            In reply to longhorn:

            "It is special in the same way a portable exe file is. It's one archive that holds the entire application. You can run it as is, no installation/extraction required. . . ."

            Incorrect. Portable Windows software is distributed either as .zip files or as .exe self-extracting wrappers around .zip file contents. You have to unzip the .zip files into a directory, and the result is a lot of subdirectories under that directory with all the usual supporting files.



          • jimchamplin

            In reply to longhorn:

            It is impressive. It’s also a 30-year-old technology, so the fact that nobody else has done it that way is impressive in a negative way!

          • F4IL

            In reply to longhorn:

            Shipping the entire application in a single binary is not really an achievement. It's just static linking, something the industry has decided against due to memory requirements and security issues.

            • skane2600

              In reply to F4IL:

              I would say using multiple binaries is quite an old idea, driven by the much more limited resources of the era it originated in. The idea was that binary code could be shared across multiple applications, thus saving disk space, and that dlls could be loaded and unloaded from memory as needed to save RAM. Limitations of storage and RAM have been greatly reduced since then, so the case is a lot weaker than it once was. It's really just a form of optimization that almost always involves trade-offs.
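              The shared-binary idea is easy to see from any language with a foreign-function interface. A minimal Python sketch, assuming a Unix-like system where the C runtime resolves as a shared library (e.g. libc.so.6 on Linux): the library is located and mapped at run time, and every process that uses it shares the same on-disk copy.

```python
import ctypes
import ctypes.util

# Locate the C runtime shared library at run time (dynamic linking in
# action). Assumes a Unix-like system; on Linux find_library("c")
# typically resolves to something like "libc.so.6".
libc_name = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_name) if libc_name else ctypes.CDLL(None)

# Declare the signature of a libc function and call it. The code for
# abs() lives in the shared library, not in this program.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
print(libc.abs(-5))  # -> 5
```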

              • wright_is

                In reply to skane2600:

                Squandering resources "because we can" is not a good solution.

                Mac apps have libraries etc. and install scripts; the average user just doesn't see them.

              • F4IL

                In reply to skane2600:

                I agree for the most part but there are still issues that largely affect security. For example:

                If an application (Photoshop, VLC, etc.) has been statically linked against vulnerable code, you have to update the application in its entirety (there are no dlls, just a single exe). On the other hand, if the application is linked dynamically (ships dlls), the vendor can atomically update just the affected dll.

                • skane2600

                  In reply to F4IL:

                  This seems to be a rather cherry-picked scenario. But if the application exists as a single file, then a single file is all that needs to be updated, just as in the dll case. Obviously the single file would probably be bigger, but it's still one file. The answer, of course, is the ever-less-popular practice of doing it right the first time.

                • F4IL

                  In reply to skane2600:

                  I'm afraid that's not the case.

                  If the vulnerability exists in code that is used by 100 applications (for example SSL), then you have to update 100 applications instead of a single affected dll. Even without accounting for network speed and system performance (the user may be in the middle of a presentation, working, etc.), this is wasteful, dangerous, and plain wrong.

                  In essence, this is how not to design an OS.
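                  The arithmetic behind that argument can be sketched as a toy model, not real packaging: "static" apps each embed their own copy of a library version, while "dynamic" apps all reference one shared copy.

```python
# Toy model: a vulnerable library used by 100 applications.
shared_lib = {"version": "1.0-vulnerable"}

# Static linking: every app carries its own embedded copy.
static_apps = [
    {"name": f"app{i}", "lib": {"version": "1.0-vulnerable"}}
    for i in range(100)
]
# Dynamic linking: every app references the single shared copy.
dynamic_apps = [{"name": f"app{i}", "lib": shared_lib} for i in range(100)]

# Patching the shared copy fixes every dynamically linked app at once.
shared_lib["version"] = "1.1-patched"
assert all(a["lib"]["version"] == "1.1-patched" for a in dynamic_apps)

# Each statically linked app still needs its own rebuilt, reshipped update.
updates_needed = sum(
    1 for a in static_apps if a["lib"]["version"] != "1.1-patched"
)
print(updates_needed)  # -> 100
```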

                • skane2600

                  In reply to F4IL:

                  I thought we were talking about installing applications, not about operating system design. Application programmers should be careful about including code that they haven't performed due diligence on.

                • F4IL

                  In reply to skane2600: "I thought we were talking about installing applications, not about operating system design."

                  You thought correctly: installing applications involves creating registry entries. The registry was a design decision and engineering implementation by msft, not by third-party application programmers.

                • skane2600

                  In reply to F4IL:

                  Not all installation processes involve writing to the registry, but I wasn't claiming that the registry wasn't a MS design. I don't know if the SSL code you were referring to was a standard Windows implementation or that of a third party, but in any case, Windows is not an application nor is it implemented as a single file, so I don't really see its relevance to this discussion.

                • F4IL

                  In reply to skane2600:

                  Unfortunately, there are installation processes that involve writing to the registry (games, browsers, editors, etc.). Trying to avoid the problem (the registry) is not the same as solving it.


                  The SSL code I was referring to was an example; it serves as a placeholder for shared code, whether a dll or a shared library (.so). The fact that we still use dynamic linking today has nothing to do with limited resources. There are significant security and maintenance concerns that cannot be overcome by increasing system resources.

