To understand this topic, we must first take a step back.
We’ve been living in the mobile app world for more than a decade now (as of 2023).
We’ve been living in the white-bread world of tapping the App Store or Google Play icon, picking the app we want to download and, within seconds, having it fully installed on our smartphone. The whole process is usually fast and error-free.
Now, let’s go back in time to the pre-smartphone era. Laptops usually shipped with a CD/DVD drive that let you play songs and movies, burn backup discs and, most importantly, install software.
Before that, things were even more difficult as you can imagine.
In the 1990s, computers were already a thing, but the Internet was not – at least not for most people. There was no viable way to download a piece of software back then.
Software companies relied on physical media like CD-ROMs and floppy disks to ship their software to individuals and companies. To deter piracy, the user generally had to type a serial number or activation code – I did that many times through the 2000s and 2010s.
As of 2023, you’re able to download and install a 60 GB game within minutes with a fiber connection.
In 2000, download rates in the developed world would reach about 100 kilobits per second. Since 1 byte = 8 bits, a transfer rate of 100 kilobits per second works out to 100 / 8 = 12.5 kilobytes per second (KB/s).
Say you wanted to download a 1 GB file in that 12.5 KB/s world – it would take around 22 hours to complete. Do you think that was a thing? It wasn’t, and that’s why CD-ROMs were so widespread.
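The back-of-the-envelope math above can be sketched in a few lines of Java. The class and method names are illustrative, not from any real library:

```java
// Download-time arithmetic: kilobits -> kilobytes (divide by 8),
// then file size divided by rate gives seconds.
public class DownloadTime {
    static double hoursToDownload(double fileSizeGB, double rateKilobitsPerSec) {
        double rateKBps = rateKilobitsPerSec / 8.0;    // bits -> bytes
        double fileSizeKB = fileSizeGB * 1_000_000.0;  // GB -> KB (decimal units)
        double seconds = fileSizeKB / rateKBps;
        return seconds / 3600.0;                       // seconds -> hours
    }

    public static void main(String[] args) {
        // 1 GB at 100 kbps: roughly 22.2 hours
        System.out.printf("%.1f hours%n", hoursToDownload(1, 100));
    }
}
```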
The world relies on legacy software, whether we (software engineers) like it or not.
Banks, governments, airlines, large retailers, healthcare, insurance and logistics rely on mainframes running 40-year-old pieces of software. They run on modern mainframe machines, and their maintainers have few or no plans to port all the legacy code bases to newer, long-term-supported frameworks.
Porting software means rewriting all the legacy pieces in newer technologies. Consider a national bank: how many functions would have to be rewritten? Consider the Internal Revenue Service (IRS): how many years would it take to redesign, code, test and ship the entire system in a newer technology? Possibly, during that timeframe, the “newer technology” would itself become legacy (laughs).
Why port old software if it’s very reliable?
Why port old software if it keeps a profitable business running?
Why port old software if we can run it on modern, reliable hardware?
The software industry needs to improve its practices for working with legacy software without ruining it. I think Java is a technology that was meant to solve this challenge – it’s hard to find another language that beats Java in terms of legacy compatibility.
It’s easier to find reasons not to port legacy software than to find reasons to port it. There are multiple ways to build newer layers of applications in modern frameworks (even cloud-based ones) that interact with mainframes and legacy code. It’s a feasible practice employed by major market players.
One of the key features of Java is its “write once, run anywhere” capability, which means that Java code can be compiled into a bytecode format that can run on any platform with a Java Virtual Machine (JVM). This makes Java a highly portable language, and applications written in Java can run on a wide range of operating systems and hardware platforms without the need for any changes to the code.
Java’s compatibility is further enhanced by its robust backward compatibility. This means that newer versions of Java are designed to be able to run code written in older versions of the language without any modifications, as long as the code doesn’t use deprecated or removed features.
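As a hedged illustration of that backward compatibility: the collection classes below, java.util.Vector and Enumeration, date back to Java 1.0 (1996), yet this style of code still compiles and runs unchanged on a modern JVM (the generics syntax arrived later, in Java 5, but the underlying API is the original one):

```java
import java.util.Enumeration;
import java.util.Vector;

public class LegacyStyle {
    // Joins strings the 1990s way: Vector + Enumeration + StringBuffer,
    // all of which predate ArrayList, Iterator and StringBuilder.
    public static String joinLegacy(Vector<String> items) {
        StringBuffer sb = new StringBuffer();
        for (Enumeration<String> e = items.elements(); e.hasMoreElements(); ) {
            sb.append(e.nextElement());
            if (e.hasMoreElements()) sb.append(",");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("COBOL");
        v.add("Java");
        System.out.println(joinLegacy(v)); // prints "COBOL,Java"
    }
}
```

Nobody would write new code this way today, but decades-old code that does is still a first-class citizen on the JVM.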
No more dumb apps
We cannot ship dumb technology anymore. We cannot even think of building a dumb app – it’s either already done or it will be easily surpassed by competitors.
It’s hard to say and digest, but a dumb application can be harder to comprehend and use: lacking intelligent features and automation, it forces users and stakeholders to invest more effort in learning its functionality.
Microsoft is slowly turning its dumb applications into smarter ones. Think of Microsoft Word: when it was introduced in 1983, it made its own small revolution. Forty years later and counting, Word is still a text editor, and there’s nothing new about that. Integrations here and there, fully woven into Office and OneDrive… but it’s still a text editor.
A few years ago, Microsoft added speech-to-text functionality to Word. You can dictate hundreds of words and the app assembles your article with near-real-time recognition. I’ve tried it multiple times, in different languages, and it worked decently well.
In this sense, Microsoft added an artificial intelligence tool to a text editor that we thought had little room for innovation, increasing the competitive advantage of the entire Office suite.
Apple has been improving its personal assistant, Siri, for more than a decade now. Instead of opening the calendar app or logging into Google Calendar to schedule a meeting or an event, you can simply say: “Hey Siri, schedule a meeting with Tim for tomorrow at 2:30 pm”. Done. Apple is ahead of its competitors by adding intelligence to its operating systems (macOS and iOS and their derivatives – iPadOS and watchOS).
If you’re taking a walk wearing your Apple Watch with connectivity, you have full access to a powerful AI-based personal assistant in Siri. No need to type or tap icons – just talk to Siri to schedule appointments, reply to messages and email, or make calls. Think about it.
The “no dumb app” axiom applies to most industries that rely on innovation as part of their sales engine. We must launch intelligent products where artificial intelligence plays an active role in the core application of the new product.
Think of a productivity app. There are plenty to choose from, beyond the ones already included by default in your operating system (Notes and Reminders on iOS and macOS, for example). You’ll find hundreds of apps, legacy and newer. Some will have better aesthetics; others’ UIs will be outdated. But few note-taking apps are actually smart enough to help you get things done.
I want the app I’m using to understand the sense of urgency that is present in the natural language that I’m writing or speaking. I want it to be able to group tasks in a productive manner based on the main goal of what I’m trying to achieve.
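To make “understanding urgency” concrete, here is a deliberately naive sketch. A real product would rely on a pre-trained NLP model; this keyword heuristic – with an invented class name and an invented cue list – only illustrates the kind of signal such a model would extract from a note:

```java
import java.util.List;

public class UrgencyHeuristic {
    // Hypothetical cue words; a trained model would learn these instead.
    private static final List<String> URGENT_CUES =
            List.of("asap", "urgent", "today", "immediately", "deadline");

    // Flags a note as urgent if it contains any cue word (case-insensitive).
    static boolean seemsUrgent(String note) {
        String lower = note.toLowerCase();
        return URGENT_CUES.stream().anyMatch(lower::contains);
    }

    public static void main(String[] args) {
        System.out.println(seemsUrgent("Pay the electricity bill ASAP")); // true
        System.out.println(seemsUrgent("Ideas for the summer trip"));     // false
    }
}
```

The gap between this toy and a model that genuinely understands intent is exactly the gap between a dumb app and a smart one.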
Therefore, to be competitive, we must bring the power of a useful set of pre-trained machine learning models and artificial intelligence algorithms into our software – and also build custom models for each user and organization that is a stakeholder. This is the next frontier of the software industry.
For instance, apps would learn from my behavior and from all the data I’m sharing and predict my needs and decisions. They would organize most data flows in a way that would save me an absurd amount of time. Generally speaking, the new software frontier would increase human productivity as a whole.