I saw my first “ergonomic” engineering project in a Harvard Engineering and Applied Sciences undergraduate capstone class. It was built around a crackling piece of graphite, an object as unglued as a shellac printout and about as useful. The object was a research rack that could turn paper-centric plastic into furniture; one use case was equipment that could be rigged to attach to a flat bed, turning it into the coffee table of your dreams.
This, in a nutshell, was the prevailing architectural and industrial view of machine parts. If they were not made right, then perhaps someone (or maybe a machine) would fix them for you. Typically, a machine performing a repetitive task would do the fixing. That kind of capital expenditure, much like the early investment in electricity, was novel yet easy to justify, because the object was the future of the company. In the industrial economy, machines were designed as products of human labor (and machines are, after all, human creations). A new technology was found (or “gifted”), disclosed to the engineers, and the assembly lines were built.
A new technology arrived in the 1960s to change all of that. Previously, it had been assumed that the energy found in coal and oil would let us build machines that ran properly on the electricity they required. Hence the “gift” of the coming age of the microchip was expected to be a boon for business: it would let engineers run machines that had previously been impossible, and those machines would work like magic. Fast-forward nearly fifty years, and it turns out that so-called “spin-on-insolence” technology (leaving the basic set of components unchanged but improving on them via tiny additions by you or me) came about because coal and oil displaced much of the traditional energy supply.