Low-Code, No-Code, LCNC: The formal definition

Abbreviations

  • UI – User Interface
  • GUI – Graphical User Interface
  • CLI – Command Line Interface
  • SDK – Software Development Kit
  • IDE – Integrated Development Environment
  • RAD – Rapid Application Development

Executive Summary

This document consolidates and defines the various terms and sub-genres within the Low-Code and No-Code world.

No-Code – A visual software development environment where the user works only within a GUI. There is no programming or CLI involved. The system potentially allows plugins for added functionality.

Low-Code – A visual software development environment using a GUI and/or SDK. In addition to No-Code functionality, the user has the capability to also add programmatic code directly in the environment. The system potentially allows CLI and plugins for added functionality.

Low Code No Code (LCNC) – Low-Code and No-Code as a technology, referring to all or parts of a system.

Pro-Code – Single or multiple systems that are written in pure code, by a programmer in whatever language suits the task best.

Background

Low-Code and No-Code have been around for several years, and the genre was first proposed in 1982, when James Martin argued in his book “Applications Development Without Programmers” that 4GL technologies (such as RAMIS and FOCUS) “opened up the development environment to a wider population and enable non-programmers to create applications themselves” (Sassi, R.B. (2021). A Brief History Of Low-Code Development. [online] Medium. Available at: https://betterprogramming.pub/low-code-history-b756c095494f). As a response to the Waterfall Model, RAD gained momentum in the 1990s as visual desktop tools like Visual Basic, Delphi, and Oracle Forms became popular. Low-Code and No-Code gained wider traction in 2016, after a Forrester publication made the term Low-Code public.

Since then, Low-Code and No-Code applications have enabled skilled developers to work faster and citizen developers to work on tasks that do not require as much technical knowledge but were previously purely within the remit of developers.

However, the fluidity and misuse of the terms and labels describing this technology has caused a lot of confusion. Specifically:

  • Formal definitions of the differences between Low-Code and No-Code are sparse and no two people describe them in the same way.
  • Two other terms have appeared recently: LCNC and “Low Code No Code”, both of which appear to define different things.

LCNC has often been used by the Low-Code community to describe how all the platforms have evolved by adding UI for repetitive tasks, i.e. adding small areas of No-Code to their platforms. Many Low-Code platforms offer No-Code functionality as well, such as Appian, Mendix, Microsoft PowerApps, OutSystems and Salesforce Lightning.

“Low Code No Code” has been used in the media to refer to all Low-Code and No-Code coding types as a whole (i.e. not Pro-Code). This phrase is mentioned as much as, if not more than, LCNC.

Solidifying the terms

In theory, the distinction between LCNC and “Low Code No Code” is redundant, and using an acronym of one term to mean something different from the expanded term leads to a lot of confusion. Therefore, we should have a single definition and only utilise LCNC as an abbreviation of “Low Code No Code”. From here on, “Low Code No Code” and LCNC refer to all Low-Code and No-Code coding types. This can apply to an entire application or part of it.

Despite the limited definitions in the public domain for No-Code and Low-Code, we can lean on the two words “Low” and “No” to distinguish pure drag ‘n’ drop UIs from GUIs that also allow code (i.e. nearly No-Code).

So No-Code refers to applications that exist 100% in the GUI (drag ‘n’ drop and configuration). 

Low-Code also exists in the GUI, but it additionally allows the user to define code, either in the GUI, SDK or CLI. Low-Code is much more flexible than No-Code, although it requires developer skills to use (albeit fewer than traditional Pro-Code, which offers ultimate flexibility). Its advantage is that a junior developer can use it to move much faster than with a Pro-Code approach.

Formal definition

No-Code

A visual software development environment where the user works only within a GUI. There is no programming or CLI involved. The system potentially allows plugins for added functionality.

Low-Code

A visual software development environment using a GUI and/or SDK. In addition to No-Code functionality, the user has the capability to also add programmatic code directly in the environment. The system potentially allows CLI and plugins for added functionality.

Low Code No Code (LCNC)

Low-Code and No-Code as a technology, referring to all or parts of a system.

Pro-Code

A system that is written in pure code, by a programmer in whatever language suits the task best.

Links

ApiOpenStudio Production Docker video

We have released a new video on YouTube, demonstrating the speed and ease of deployment: spinning up full running API and Admin installations on separate servers in under 30 minutes!

The ApiOpenStudio Production Docker video can be found at https://www.youtube.com/watch?v=iZ_Q81MhXUw.

This is a response to the discovery of an issue with the admin Docker image at the end of last week. The issue was resolved over the weekend and the docker images have been re-published.

ApiOpenStudio Admin production docker fixed

While preparing for yesterday’s seminar, it was found that the naala89/apiopenstudio_admin docker images were broken (the issue can be viewed here).

The good old-fashioned “it works on my machine” came back to bite us, but the code has been updated and is now correctly wired into the GitLab pipelines. Many thanks to laughing_man77 for jumping on this so quickly!

Problem solved!

We’re pleased to announce that the issue has been resolved, and all tags have been re-uploaded to docker hub. So that was a really quick turn-around of 1 day (and we even had a chance to sleep on it before pushing it live).

Note: the only code that was affected was the https://gitlab.com/apiopenstudio/docker_images/apiopenstudio_admin_docker_prod repository. So if you have an existing checkout of the admin MVP code, this will not affect you.

Apart from the good news that full production images for ApiOpenStudio core and admin are fully working and tested, I also tested the released image on an existing server – and proved that the install time of admin is under 5 minutes!

New Docker images for ApiOpenStudio!

The latest, hot off the press: we have finalised and published the Docker creation pipelines. This means that you will always have the latest versions of the ApiOpenStudio Core and Admin images at your fingertips.

The images are very fast and easy to install and completely self-contained (server and all requirements, including Composer dependencies). That means you do not need to install any infrastructure on a barebones server, other than Docker, a config file and SSL certificates.

The scripts are configured to generate images from tags or any branches we want to release (this includes develop/master branches and any development branches that we deem necessary for testing).

Full documentation of the installation process is available on our wiki.

The scripts are available in the GitLab docker repositories (Core and Admin), and I’ve also taken a little time out to post a gist of one of the scripts that others may find useful (downloading a specified branch or tag from GitLab or GitHub into a location) at https://gist.github.com/naala89.

ApiOpenStudio Introduction Seminar (Rescheduled)


ApiOpenStudio will be holding our first live event with our Developer/Founder, John.

This event has been rescheduled to:

  • Friday 4th of November at 2pm AEST.

The topics include an introduction to the product and a run-through of all the basics of the technology, and where it saves you time and money.

This will be followed by a Q&A on the product and the business concept.

There is no need to register, as we will be live on YouTube. However, we are offering a free 90-minute support package to anyone who does register.

To register, fill in the contact us form on the home page.

API First Concepts and Problems

Introduction

APIs are displaying continued astronomical growth (see https://www.postman.com/state-of-api/api-global-growth/#api-global-growth) and are now a critical part of nearly all modern architecture. However, we have seen a lot of coverage recently of the different terms encompassed by the API First methodology, especially as people try to use these buzz-words to generate excitement over their products without fully understanding what they mean.

In IT, terms tend to evolve organically, due to the pace of change. This is not the ideal way to create terms, as it can lead to many similar-sounding terms, causing sub-optimal communication and even mistakes. Unlike industries such as aviation and maritime, which are the gold standard for communication, we do not set a standard declaring precisely which terms to use and when, to prevent confusion, misunderstanding, and planes and boats crashing into each other.

For instance, we now have the following:

(Yes, I know that some of these are unrelated to APIs, however there is a real problem with language here – everything sounds the same and confuses people)

  • “API First”
  • “Code First”
  • “Data First”
  • “Design First”
  • “API First Design”
  • “Business First API Design”
  • “API First Culture”
  • “API First Company”
  • “API Strategy”
  • “API as a Product”
  • … and the list goes on…

I don’t know about you, but my head is rattling from all of these variants. This is getting ridiculous and incredibly confusing for everyone, requiring a lot of reading to understand the nuances of the new terms being introduced and how they may subtly differ from the others.

As technologists, we need the business community to fully grasp the ideas and benefits behind the API first methodology so that we and they can make sound judgements on directions and strategies. This means that the ideas need to be immediately comprehensible and their ramifications obvious.

Rubbish pile
Image: https://www.onmanorama.com/lifestyle/news/2022/08/10/eliminate-impact-single-use-plastic-products.html

I am going to suggest that we throw out a lot of these phrases and keep things simple, but I won’t try to reinvent the wheel and create completely new phrases. For the rest of this article, I will only refer to 3 terms.

API First concepts

Each term has an impact on the technical and business strategies/successes in different measures. But each concept requires a full “buy-in” and understanding from the business.

API First

  • Design and build the API first!
  • The product is built around the API.
  • This is reflected in the business strategies and revenue models (see API as a Product and API First Design).

API First is obvious from the business point of view. The API is either the primary income generator or essential to other income-generating products. Without a revenue model, you do not have a business. Therefore, if APIs are the main product or critical to a product being created or overhauled, the API becomes a crucial first-class citizen, not just for the technical department but also for the business.

Information and data are key to the modern business, right? APIs are the backbone for nearly all data communication.

Building a new product (not just an API) is an expensive task, both in time and money. By going through a full design phase, the API gets the emphasis and attention it needs, and a lot of thought will be put into:

  • What data is being shared across the API.
  • Why this data sharing is required or wanted.
  • Who/what will be consuming the data (internal and/or external).
    • This will reveal any potential flaws or opportunities in assumptions that the business has made.
  • Ensuring that all business processes are made more efficient by the new agile data flow that APIs will provide.
  • Performance.
  • Reliability.
  • Scalability.
  • Usability.
  • Pricing.
  • Future changes/versioning.

In return, API First will have a causal effect on the business, by ensuring that due diligence and research are carried out before embarking on the costly venture of a system build. A lot of thought will be put into what the business needs and why, and future projections and plans become much clearer.

This does not mean that all other resources sit idle until the API is built; in fact, other work can happen simultaneously. Once you have the API model documented in a format like OpenAPI (and rendered with tools like Redoc), quite early in the process, you have a model that consumers can work against before full development starts. This saves time and money fixing broken assumptions and missing or inadequate requirements.
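To make that concrete, here is a minimal contract-first sketch (not tied to any particular platform; the paths and fields are hypothetical) that defines an API model as an OpenAPI 3.0 document in Python, which consumers, mock servers and documentation renderers such as Redoc can work from before any server code exists:

    # A minimal contract-first sketch: define the API model as an OpenAPI 3.0
    # document before building anything. Paths and fields are hypothetical.
    import json

    spec = {
        "openapi": "3.0.3",
        "info": {"title": "Orders API", "version": "1.0.0"},
        "paths": {
            "/orders/{id}": {
                "get": {
                    "summary": "Fetch a single order",
                    "parameters": [
                        {
                            "name": "id",
                            "in": "path",
                            "required": True,
                            "schema": {"type": "integer"},
                        }
                    ],
                    "responses": {
                        "200": {
                            "description": "The order",
                            "content": {
                                "application/json": {
                                    "schema": {
                                        "type": "object",
                                        "properties": {
                                            "id": {"type": "integer"},
                                            "status": {"type": "string"},
                                        },
                                    }
                                }
                            },
                        }
                    },
                }
            }
        },
    }

    # Write the contract out so consumers, mock servers and doc generators
    # can start working against it before the implementation exists.
    with open("openapi.json", "w") as fh:
        json.dump(spec, fh, indent=2)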

API as a Product

  1. The API may be the core product, but not necessarily.
  2. API as a product has 2 definitions:
    1. External-facing API that is monetised
    2. Indirect monetisation, either via a product that requires the API to function, or via public exposure and usage through a free public API.

API as a Product is tightly coupled to the API First Design concept. In order to implement business strategies for revenue-driving APIs, good design and documentation are a must-have, enabling third parties to consume the APIs with an easy-to-use, comprehensive and unified experience.

If the API product is well designed and documented, this will increase the consumption and efficiency of the API (and therefore income). Postman alone has seen very significant increases in traffic and API usage, despite the global downturn and Covid:

Since the 2021 State of the API report was released, the Postman API Platform saw significant surges in use:
* Users: 20 million
* Collections created: 38 million
* Requests created: 1.13 billion

https://www.postman.com/state-of-api/api-global-growth/#api-global-growth

API First Design

In general, this is a development model for APIs that ensures the following criteria are met:

  • Reach agreement from all stakeholders.
  • Design the API models first in OpenAPI or a similar product.
  • Build the API.
  • Separate products can be built at the same time as the API.

API First Design may not be quite so obvious from the business point of view, because at first glance it is just a development process. However, because it requires approval from all stakeholders, it is critical to have business “buy-in” to the process. This lets the whole business see what the API is being built for and why, and ensures that everyone agrees on the most critical features they need.

In addition, this gives the technical department a rare instance of a complete understanding of what is being built and why. Instead of just being told to “build this” by the business, without any consultation on the rationale, what is technically feasible, or whether there is a better solution, the developers see the vision and can work towards it far more efficiently, and the end result will be a much better product.

Conclusion

This all ties in with discoveries and revelations being made about “Developer Experience”.

Up until recently, a lot of research and time has been invested in user experience (UX), and there is no denying the value that this can contribute to the business by increasing traffic and revenue. However, a new key realisation is being made about developer experience (DevX). Who will be directly interacting with and consuming the APIs (and therefore making the critical decisions, which will impact their business, about which APIs to consume and how frequently)? That’s right: developers!

They will rightly discard badly documented or difficult-to-use APIs in favour of better ones. This means your product will be directly impacted by DevX!

If you are supplying an external facing API for revenue (direct with monetisation or indirectly by offering APIs free with the associated traffic, publicity and exposure), then DevX is crucial to maximise income.

If you are supplying an internal-facing API, then good DevX will have an impact on the design, development and maintenance of associated systems. Business processes that interact with the API will either be made more efficient by a well-designed API or made cumbersome, slow and inefficient by bad APIs.

Beta release is here!!!

We are excited to announce the official release of the Beta Version of ApiOpenStudio!

We have worked very hard on this release, which includes a lot of bug fixes, enhancements and requested features. Here is a quick overview of the updates:

Beta release features

  • Updated the wiki.
  • Fixed issues in https://phpdoc.apiopenstudio.com.
  • Fixed caching:
    • Caching supports single and cluster cache servers, or on-server caching.
    • Added support for Redis.
    • Deprecated support for APC.
    • Implemented optional caching for individual processors.
  • Remote server and email outputs via plugins.
  • Created multiple plugins for remote output, see https://packagist.org/packages/apiopenstudio
  • Automated OpenApi document generation, supporting 1.x, 2.x, 3.x.
  • Ability to manage 3rd party packages and repositories within ApiOpenStudio.
  • Created two sample processor plugins
  • Implemented Fragments (process repeated calculations only once).
  • Added new processors to core:
    • JsonPath & XmlPath.
    • Cast.
    • Logic processors:
      • Do…While.
      • For…Each.
      • If…Then…Else.
      • Sequential.
    • Math.
    • Modules CRUD.
    • OpenApi CRUD.
  • Added more unit tests (all core functionality is now fully tested).
  • Added command line scripts (and API – so that it can be done using the GUI) to manage plugins.
  • Integrated the ApiOpenStudio CLI commands (install, update, module) with Composer.
  • Added a new page to the admin GUI to manage plugins.

We still have some known issues, and features that we want to build:

Tasks in the immediate roadmap

  • Begin the re-design and rebuild for the admin GUI with better UX and UI. This will include a drag ‘n’ drop interface to building resources.
  • Create a production docker for easier and secure installation.
  • Fix minor issues in the automated OpenAPI document generation
  • Add scanning for GET and POST variables to be automatically added to the OpenAPI documentation.
  • Mechanisms to upgrade or downgrade OpenAPI versions.
  • A mechanism to update OpenAPI if the domain is changed.
  • Add testing for PHP 8.1.
  • Add groups to permissions.

Free Support packages, for a limited time!

We are now seeing an exciting 27% month-on-month growth in the install base of the Alpha version (OK, “exciting” is a marketing term. We don’t normally use them, as that’s not us, but humour me – I think that’s what we are supposed to say).

We are particularly happy about this, as we took the decision to launch ApiOpenStudio in an advanced form of Alpha so that we could get early feedback and build the best open source product possible.

To celebrate this, and the approaching Beta release, which we now estimate is only three months away, we are offering a free 30-minute support package to anyone who has downloaded the product.

This will support our early adopters, as well as give us the best possible feedback – the key to building the best product possible.

This will be with our founder John (wherever possible), as we really want to get the maximum from this customer interaction.

So if you would like to take this up, please reach out to Matthew Greally on LinkedIn or the Contact Us form.

1.0.0-alpha3 Release

After finishing and closing a large chunk of the tickets that we have planned for the beta release, we had a minor panic…

ApiOpenStudio Admin has largely been neglected (because it was only ever supposed to be a quick-fix MVP and will be completely replaced before v1.0.0) while we focused on the backbone of this project: the API (ApiOpenStudio core). However, it was no longer compiling, due to package.json issues, and it needed updating to match changes in the API core resources that it consumes.

This has been quickly resolved, and new release tags have been added, which are now on Packagist.

If you are updating an existing instance of Api Open Studio, please make sure that you run the updates:

  • Log in to the server that contains the API code.
  • Run ./includes/scripts/update.php

In addition, ApiOpenStudio Docker Dev has been updated to support both PHP 7.4 and PHP 8.0.

Summary of changes in 1.0.0-alpha3:

  • Wholesale changes in the wiki
  • Changed the token auth to JWT tokens.
  • Updated gitlab-ci:
    • Use the new naala89/bookdown-rsync, naala89/phpdoc-rsync, naala89/apiopenstudio-nginx-php-7.4 and naala89/apiopenstudio-nginx-php-8.0 images.
    • Fixed gitlab runner artifacts.
    • Tests run on all merge requests and deploy to wiki/phpdoc on merges.
  • Deprecated Cascade logger and created a wrapper for Monolog.
  • Removed bookdown/bookdown from the composer dev dependencies.
  • Deprecated the Mapper processors.
  • Created new JsonPath and XmlPath processors.
  • Added functional tests for user and role.
  • Created new traits for datatype conversion.
  • Implemented casting on all input vars like VarPost.
  • Create/update CRUD processors now return the value result, rather than true/false.
  • New Cast processor.
  • Automated tests now run for ApiOpenStudio on PHP 7.4 & PHP 8.0.
  • New code to make globally converting data types in processors easy.
  • You can now specify the expected input data type (automatically cast) in the following processors with a new expected_type attribute:
    • var_post
    • var_get
    • var_uri
    • var_request
    • var_body

2021 State of the API Report

The findings in this report are golden, and kudos to the Postman team for producing a well-balanced and well-researched report (https://www.postman.com/state-of-api/, 2021).

The full report is available at: https://www.postman.com/assets/api-survey-2021/postman-state-of-api-2021.pdf.

Its findings are highly encouraging, and reading between the lines, are a fantastic indicator that the industry is on target for a continued adoption of mobile-first, API-first and micro-service architecture.

Our key takeaways from this report:

The API ecosystem is global and growing

Postman reports continuing growth in API activity:

  • Users: 17 million
  • Collections created: 30 million (up 39%)
  • Requests created: 855 million (up 56%)

There are many more people, other than developers using APIs

Breakdown of roles consuming APIs

Developers are spending more time with APIs

  • < 10 hours/week: 33%
  • 10 – 20 hours/week: 39%
  • > 20 hours/week: 28%

This was a rather surprising set of stats, probably because the respondents were largely API-focused developers.

API first methodology

Encouragingly, there is increased awareness of the API-first methodology, and more businesses are approaching their architecture in this way:

Companies embracing API-first methodology

Sadly, there was an inconsistent or lack of understanding of what API-first actually meant:

Defining API-first

Public vs Private vs Partner

Of interest here is that the vast majority of APIs are intended for private use within companies. This ties in with the API-first methodology, where APIs are considered first-class citizens (Understanding the API-First Approach to Building Products, 2021):

APIs are treated as “first-class citizens.” That everything about a project revolves around the idea that the end product will be consumed by mobile devices, and that APIs will be consumed by client applications. An API-first approach involves developing APIs that are consistent and reusable, which can be accomplished by using an API description language to establish a contract for how the API is supposed to behave.

  • Private (only used by your team or your company): 58%
  • Partner (shared only with integration partners): 27%
  • Public (openly available on the web): 15%

Lack of time was the biggest obstacle to producing APIs

Over 45% of API developers claimed that their main impediment was lack of time.

JSON Schema is by far the biggest specification tool for APIs

JSON Schema was by far the top specification in use, cited by three-quarters of respondents

Now this one really surprised us (we had assumed it would be OpenAPI 3.0, but that came in below Swagger 2.0). Considering that API documentation standards have still not coalesced into a single accepted standard, there should be no surprise at movement here, and it has to be said that JSON Schema is fantastic, especially for defining complex, nested object types.

Quality is the biggest priority for APIs, above security

Respondents identify the top priorities for their development teams and organisations

This was also a surprise to us, although it transpires that a lot of APIs consume public APIs, so we assume that when quality is specified, respondents mean the quality of the data and the resource specification, leading to a better resource offering and more consumers of it.

Major change to the ApiOpenStudio repository location

In order to implement pipelines and docker, with automated builds of docker images, the ApiOpenStudio projects have all been added to a new ApiOpenStudio group in GitLab.

This will enable GitLab pipelines to orchestrate pipelines across all of the projects as code is pushed and merged.

Upcoming tickets and tasks depended on this, so it could not be delayed any longer. As part of this change, we have merged the develop branch into the master branch, which updates the wiki and phpdoc to reflect these changes.

However, a new release tag for Packagist has not been generated at this stage, because we are only a few tasks away from the beta release.

New changes available in the master branch:

  • GitLab CI pipelines now faster, (#118 – closed).
  • Wiki pages updated (#118 – closed & #115 – closed).
  • Fixed CI artefacts not being uploaded on failure (#117 – closed).
  • Logging now works on PHP8.0 as well as PHP7.4 (#111 – closed).
    • This involved deprecating Cascade, and creating a wrapper for the awesome Monolog package.
  • Implemented full JWT token authentication (#101 – closed).
  • Fix automated unit and functional tests (#110 – closed).
  • The entire project code has been updated to ensure all the latest PHPdoc and coding standards are passed.
  • Fixed Packagist for apiopenstudio_admin – sorry, this was my bad – it was a copy and paste error that went unnoticed.

Contributors and developers using the codebase

If you have a clone of the Gitlab repository, you will need to update your remote branch with the following command (assuming you have cloned with SSH):

git remote set-url origin git@gitlab.com:apiopenstudio/apiopenstudio.git

If you have a clone of the GitHub mirror, you will need to update your remote with the following command (assuming you have cloned with SSH):

git remote set-url origin git@github.com:naala89/apiopenstudio.git

If you have forked the Gitlab repository, you can update the upstream URL:

git remote set-url upstream git@gitlab.com:apiopenstudio/apiopenstudio.git

The updated URLs

The new group URLs

The GitLab project URLs

The GitHub mirror URLs

Exciting upcoming features for the Beta release

  • Unit and functional testing for PHP 8.0, to ensure it works across all contemporary PHP versions.
  • Composer 2.0 should be fine, but this should be tested before Beta release.
  • The Swagger processor will be brought up to date and fixed, to allow importing and exporting of OpenApi documents.
  • Automated tagging and generation of an ApiOpenStudio Docker image

Are you hitting the low-code sweet spot?

Low-code solutions, as part of your IT landscape, are clearly gaining traction. Low-code now actually has its own Gartner Magic Quadrant!

Meanwhile, the other big gun, Forrester, reported that in 2019, 37% of developers in its worldwide survey were using or planning to use low-code products, and predicted that by mid-2020 this number would rise to more than half of developers.

Finally, to complete the trifecta, Capgemini have now included low-code in their “Top Ten Trends”. So all three planets are aligned.

Forrester research found that 100% of enterprises that have implemented a Low-Code development platform have received ROI (Forrester 2019, Large Enterprises Succeeding With Low-Code, viewed 23 June 2021, https://assets.appian.com/uploads/2019/03/forrester-tlp-lowcode.pdf).

As ever, a lot of what we read out there is a mix of genuine analysis and the marketing objectives of the company writing it. The question really becomes: are your low-code strategy and applications hitting your “low-code sweet spot”?

What low-code solutions do you need, and where? How big should you start with low-code? Whom do they enhance? And, importantly, where shouldn’t you use them?

It’s worth remembering that companies can go too far in trying to remove developer costs. Using low-code the wrong way, or too widely, can severely straitjacket your development options.

Developers and low-code

There is an ideal mix of four key areas, which varies with each business and its development needs:

  • High level expensive developer talent.
  • Less experienced and lower cost developers.
  • The right people with skills to access low-code & no-code solutions.
  • What the industry is now calling “Citizen Developers” (keeping in mind they often know your business processes & requirements better than anyone).

Do you have the right low-code app in place, so your expensive front-end developers don’t have to hand the requirements of an API to an equally expensive back-end developer (who is juggling this with another, equally mission-critical task), even though the front-end developer has little on that week and would otherwise move to lower-value tasks? Or so you can take advantage of the extra efficiency from the fact that neither of them has to dedicate time to communicating what the front-end developer wants?

Communications tasks are typically underestimated costs

With a low-code solution like ApiOpenStudio, front-end developers can go straight to API creation. This can be great for evening out the load in a team where front-end developers might otherwise be cooling their jets on less important tasks while they define the API and then hand it to back-end developers to implement.

This flexibility – and being able to quantify it – is the key to tuning your low-code mix, as the team becomes more efficient.

Finally, if they are both flat out, can a less experienced developer or, in the right environment, a cross-trained “Citizen Developer” with basic JSON or YAML skills be deployed? Ideally they should be close to the project and its requirements.

Low-code enables members of the team who are closer to the requirements and the product or project to build and manage an API themselves, using – and in many cases replacing – the time they would have spent communicating this to others with actually developing the product.

Equality does not exist in low or no-code

Low-code and no-code platforms exist on a spectrum. On one extreme, you have platforms offering very basic functionalities – i.e. simple form and logic creation, combined with rudimentary document automation capabilities. On the other, you have platforms allowing citizen developers to build large, end to end workflow solutions, encompassing features like e-signature integrations, multi-step approvals, email reminders and data management.

So time and thought need to be put into the use-cases that you want to address with low-code implementations. This will prevent the often frustrating situation that project or product managers face when developers reply “nope, that can’t be done” due to the limitations of the software.

The balance

Like just about all movements in IT that become long-term, there is still a lot more to it in terms of taking it to your business and marketplace than the initial Marketing Hype. The real sustainable change is almost always different and requires a deeper understanding of how things really work to make sure the rubber hits the road.

So what do you really need to consider to realise the value of low-code across an organisation? 

The fact is that low-code involves a trade-off – one that is worth making, but a trade-off nonetheless.

On the one hand, low-code enables those closest to the product and business requirements to build what they need and build it faster. It eliminates layers of process and management… business units can, in the right environment, move forward without consulting IT. Low-code makes business Agility happen, as it changes how the business works with software.

HOWEVER…… 

The fact is, though low-code is highly effective for many businesses, the MORE you use it, the more you constrain your development. That is the trade-off.

This is one of the reasons why pro-code (or pure developers) have little to fear from low-code. Though surveys show many of them fear it, this is not borne out in the data, particularly over the next decade, with Microsoft recently estimating a shortfall of one million developers in the USA alone.

Being able to plan and resource your company’s low-code mix, as well as advise where it is not appropriate (like when your CFO thinks he can do it all with low-code just to save money!), is becoming part of the career skill set for professional developers.

How low can you go?

Low-code, by definition, also enables fast followers, as it gives them a quicker and lower-cost pathway to follow. So I would think twice about ever letting your marketing department tell the world how you got there.

We think it’s important to realise (after years of researching and discussing this market trend with stakeholders) that low-code and pro-code do not cancel each other out. No organisation should aim to be one or the other.

So the “democratisation of development”, like all of the most successful democracies, needs good checks and balances: judges, oversight and impartiality in the execution.

Summary

So, as you would expect, there are quantifiable aspects to this:

Is it giving you enough power while liberating you from rising development costs, driven by the rising price of developers and the need for ever more of them as companies race to meet the demand for richer digital experiences?

Whole platforms are not the place to start, and may not be the place to go. But starting with something like API creation and management can reduce the cost of running both the internal apps and the outward-facing business and web apps that the customer sees. In most cases, these apps will rely heavily on external feeds, and there is great benefit in a low-code approach to this.

Increased security and speed with JWT tokens

Current dev work is almost complete for implementing authorisation with JWT tokens for all resources! This will be part of the upcoming Beta release.

The ticket can be viewed in Gitlab.

This will replace the existing alpha version of a custom token and token TTL for each user in the user table.

It is quite important to note, before we move on, that JWT tokens are a different thing from OAuth2 and its implicit grant, explicit grant, application grant and PKCE authorisation flows. JWT is only a standard for tokens. If you need to implement OAuth2 or other similar workflows, this is separate from the JWT implementation.

The problem

The problem with the former approach was that resource requests had to make DB calls to the user, user_roles, roles, account and application tables in order to verify the user’s permissions for that particular resource, FOR EVERY API CALL. This obviously negatively impacted performance.

This also meant that authorisation was not easily scalable to authorisation servers for enterprise implementation, because the implementation of the token and authorisation for API calls was tightly coupled to the ApiOpenStudio database and several of its tables.

The solution

Although the former approach was stateful (it maintained login state, so users could log in and out), the stateless JWT token approach means that the token does not need to be stored in the database. The downside of stateless JWT tokens is that there is no logout state, so if a user’s access is revoked, they will still have access to resources until their current token expires.

However, this can be mitigated by making the JWT token lifetime short in the ApiOpenStudio configuration.

Each JWT token contains custom claims for the user ID and all of that user’s roles. So when a request is received by ApiOpenStudio, it just decodes and validates the token and checks the user’s roles against the resource’s account/application and permissible user roles (i.e. does the current user have the required role access to the account and application?).
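As an illustration only (this is not the ApiOpenStudio implementation – the claim names, secret handling and lifetime below are assumptions), the following Python sketch shows the general pattern of issuing a JWT with custom uid/roles claims and validating the roles on each request:

    # Illustrative sketch only, NOT the ApiOpenStudio implementation: issuing
    # and validating a JWT with custom claims (uid, roles). The secret, claim
    # names and lifetime are assumptions for the example.
    import time
    import jwt  # PyJWT: pip install pyjwt

    SECRET = "change-me"       # assumption: a shared HS256 signing secret
    LIFETIME_SECONDS = 600     # keep tokens short-lived to limit revocation lag

    def generate_token(uid, roles):
        now = int(time.time())
        claims = {
            "uid": uid,        # custom claim: user ID
            "roles": roles,    # custom claim: the user's roles
            "iat": now,
            "exp": now + LIFETIME_SECONDS,
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def validate_token_roles(token, required_roles):
        # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad
        # tokens; expiry is checked automatically against the "exp" claim.
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        return bool(set(required_roles) & set(claims.get("roles", [])))

    token = generate_token(42, ["Developer", "Consumer"])
    print(validate_token_roles(token, {"Developer"}))  # True

Because no database lookup is needed to check the roles, the per-request cost is just the signature verification and claim comparison.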

Knock-on effects

The following processors have been retired:

  • user_login.
  • user_logout.

Nearly all core resources have been updated to use the new processors:

  • generate_token (generates a valid JWT token for a user, with custom claims: uid, user roles).
  • validate_token (validates the Authorization token as a valid JWT token).
  • validate_token_roles (validates the Authorization token as a valid JWT token and also validates that the user has the correct role permissions for the resource).
  • bearer_token (not used by core at the moment, but preserved for any processors that need access to the bearer token).

Processors have been optimised, now that they do not need to do any pre-validation on who can do what – this is left to the core resource definitions.

Tests are updated to incorporate the changes, and also now have multiple test users with different roles.

The good news

Not only has this significantly improved API response times, it has also made the API much more scalable for enterprise. We communicated with and researched several major 3rd party authorisation services, including Auth0, to make sure that the decision to move to JWT tokens and custom claims would still be viable if a 3rd party auth server were used.

Most 3rd party authorisation services can link into external databases, which would take the heat off the API server for token generation and allow token generation to be completely decoupled from ApiOpenStudio. This will be the subject of a future post.

Joining the API economy

We’ve all heard about the API economy and the extra revenue it can provide while increasing the network and visibility of the business. In this post, we discuss the process of, and offer advice on, actually joining the API economy.

Types of APIs

There are basically two areas of APIs:

  • Internal APIs that are never exposed to the outside world and are generally intended for a micro-service architecture. The benefits and challenges of these will be discussed in a separate post.
  • Externally exposed APIs that offer data and services to 3rd parties. These can be either free or paid.

This post will deal with externally exposed APIs. Purely internal APIs are not strictly part of the API economy; they are services within the company.

Moving into the API economy

The decision to move into the API economy might require a cultural shift within your business, and one that would be very beneficial. It is primarily a business decision, rather than something left solely to the IT department to find ways of using the data they have collected for the benefit of the business. This is a good thing! It requires the whole business to get together and decide what data to share, whether there is already enough data to share, what extra data and metrics need to be collected, how they will be collected, whether the data needs to be changed, etc.

Approach

I would recommend taking a top-down approach to this, rather than launching your IT dept into coding your great idea. The planning is very much a business decision, and each department should be involved at nearly every stage – from project inception, through meetings and discussions of the potential merits of the plan and the ideas it will spawn, to final planning and execution.

This might require a cultural change in your departments, as the different departments start to think about what assets they have or can create to be added to the API suite. They will probably find that they need to change processes and approaches in order to fully embrace this.

REST APIs

Defining what a REST API can do is a separate topic for another post. But essentially, it is built on the rather convenient request methods in an HTTP request:

  • POST
  • GET
  • PUT/PATCH
  • DELETE

These allow Create, Read, Update and Delete requests to be made over the API. If you want to impress your IT team, the acronym for this is CRUD. Thus, you can merely Read (GET) data, or you can also Create (POST), Update (PUT or PATCH) and Delete (DELETE) data.
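As a quick illustration, here is how that CRUD mapping looks from a consumer’s point of view, using Python’s requests library against a hypothetical endpoint (the URL and the "id" field in the response are assumptions for the example):

    # A consumer's-eye view of the CRUD-to-HTTP mapping, using Python's
    # "requests" library. The endpoint https://api.example.com/v1/posts and
    # its response shape (an "id" field) are hypothetical.
    import requests

    BASE = "https://api.example.com/v1/posts"

    created = requests.post(BASE, json={"title": "Hello"}).json()   # Create
    post = requests.get(f"{BASE}/{created['id']}").json()           # Read
    requests.put(f"{BASE}/{created['id']}", json={"title": "Hi"})   # Update
    requests.delete(f"{BASE}/{created['id']}")                      # Delete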

GraphQL APIs

Defining what a GraphQL API can do is another topic for a separate post. But essentially, it addresses one of the shortcomings of the REST structure: meta-links.

REST has a shortcoming in that you cannot specify data selection parameters and related items in the same request without custom attributes in the query. This leads to multiple round trips and requests: e.g. fetch all posts, then, for each post, follow the links contained in each item to make subsequent requests for things like comments or taxonomy terms. This can significantly increase the data loading time.

GraphQL addresses this problem by allowing an API request to include the desired data structure and request elements, so you can fetch all of your data in one request.
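For comparison, here is an illustrative GraphQL request (again in Python, against a hypothetical endpoint and schema) that fetches posts and their comments in a single round trip:

    # An illustrative GraphQL query that fetches posts and their comments in
    # one round trip. The endpoint and schema fields (posts, title, comments,
    # author, body) are hypothetical.
    import requests

    query = """
    {
      posts {
        title
        comments {
          author
          body
        }
      }
    }
    """

    response = requests.post("https://api.example.com/graphql",
                             json={"query": query})
    print(response.json())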

Commercial benefits

A commercial decision should be made on whether to make the APIs free or accessible only through a payment gateway and account access. Once that is decided, security and volume loads need to be considered. With the explosion of free and commercially driven APIs, along with the massive increase in JavaScript frameworks and headless architecture, traffic could potentially be high, so provision will have to be made for a scalable server architecture. This is a huge topic for a separate post.

Thought should be given to what service you are providing to 3rd parties and customers:

  • What benefits will they get from these new data and service endpoints?
  • How easy will it be to use and access?
  • What will the format of the data be?
  • Will the customers require any customisation and tailoring of the services to their needs? For instance, Uber’s custom requirements from the Google Maps APIs.
  • Is there a business model for customisation, etc?

If access to the API is going to be limited to paying customers or selected 3rd parties, then access control needs to be implemented. This is where ApiOpenStudio and some other API frameworks come into their own. You can define user, departmental and account roles for individual users or groups, and then define what access rights these roles have to individual API resources. Perhaps you only want to give a 3rd party Read access to specific data, whilst giving one of your departments full Create/Read/Update/Delete access to all or a subset of the data. Maybe your API model aims to enable a 3rd party or department to control their own siloed data – that data would be private to them, they would have full Create/Read/Update/Delete rights over it, and only they would have access to it via the APIs (with the exception of you monitoring the data for security, API request rates and data volume control).

Creating your APIs

Before you dive straight into creating the APIs, you should also consider them from the user’s viewpoint. How easy will they be to use? Do they provide data in the format that is easiest to consume? How will the resources be discovered? Is there any benefit in writing code to consume the APIs? What other competitive resources are out there, and are they better?

Once you have decided on the basic API model that you want to provide, you can get down to the nitty-gritty of defining each resource and what it will do. ApiOpenStudio, and paid-for services like MuleSoft, will allow you to import API resource definitions from Swagger. If the API resources need processing logic on the data before final delivery, this should be defined and created. This is very simple in ApiOpenStudio, which is designed specifically to make it quick and easy, meaning you do not need to employ expensive developers who are experts in a specific coding language to implement them (which can also be a time-costly exercise).

Once you are ready to go, you need to pay specific attention to the marketing of the new API suite. If you just put it out there and wait for the customers to come, it is almost certainly going to fail. It is very important to put thought into how you will let people and companies know about the API: maybe an email blast to your customers, creating a specific website for the suite to expose it to the public, blogging, getting listed in aggregate API directories, etc.

Alpha release is nearly there!

Alpha release is so close we can touch it! We are getting very excited about this now!

This is the final countdown, with only a couple of weeks left:

  • This site is nearing completion.
  • ApiOpenStudio has gone through a major refactor after the pre-release branding change.
  • About 90% of the functional tests are reviewed and working.
  • Domains and servers are set up and working nicely.
  • The wiki has been worked on and is looking nice.
  • The PHPDoc is performing nicely.
  • GitLab pipelines are now pretty much working as we want them to:
    • Linting on all merges and PRs.
    • Compiling and deploying the wiki to dev and prod.
    • Compiling and deploying the PHPDoc to dev and prod.
  • The GitHub mirror has been set up.

I’m disappointed that I’ve so far been unsuccessful in integrating the Codeception tests with GitLab CI. I really wanted that ready for the initial release. However, since this is the Alpha release, I’ll aim to get that ready before the Beta release.

See /roadmap for more details of the ongoing work.
