1.0.0-alpha3 Release

After finishing and closing a large chunk of the tickets that we have planned for the beta release, we had a minor panic…

ApiOpenStudio Admin has largely been neglected (it was only ever supposed to be a quick-fix MVP and will be completely replaced before v1.0.0) while we focused on the backbone of this project: the API (ApiOpenStudio). However, it was no longer compiling, due to package.json issues, and it needed updating to utilise and match changes in the API core resources that it consumes.

This has been quickly resolved, and new release tags have been added, which are now on Packagist.

If you are updating an existing instance of ApiOpenStudio, please make sure that you run the updates:

  • Log in to the server that contains the API code
  • Run ./includes/scripts/update.php

In addition, ApiOpenStudio Docker Dev has been updated to also support PHP 7.4 and PHP 8.0.

Summary of changes in 1.0.0-alpha3:

  • Wholesale changes in the wiki
  • Changed the token auth to JWT tokens.
  • Updated gitlab-ci:
    • Use the new naala89/bookdown-rsync, naala89/phpdoc-rsync, naala89/apiopenstudio-nginx-php-7.4 and naala89/apiopenstudio-nginx-php-8.0 images.
    • Fixed gitlab runner artifacts.
    • Tests run on all merge requests and deploy to wiki/phpdoc on merges.
  • Deprecated Cascade logger and created a wrapper for Monolog.
  • Removed bookdown/bookdown from the composer dev dependencies.
  • Deprecated the Mapper processors.
  • Created new JsonPath and XmlPath processors.
  • Added functional tests for user and role.
  • Created new traits for datatype conversion.
  • Implemented casting on all input vars like VarPost.
  • Create/update CRUD processors now return the value result, rather than true/false.
  • New Cast processor.
  • Automated tests now run for ApiOpenStudio on PHP 7.4 & PHP 8.0.
  • New code to make globally converting data types in processors easy.
  • You can now specify the expected input data type (which is automatically cast) in the following processors, via a new expected_type attribute:
    • var_post
    • var_get
    • var_uri
    • var_request
    • var_body
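To make the expected_type idea concrete, here is a minimal Python sketch of this kind of input casting. The cast_input function and the type names are hypothetical illustrations, not ApiOpenStudio code; in ApiOpenStudio the cast is driven by the expected_type attribute of the processors listed above.

```python
# Hypothetical sketch of casting an incoming request variable (which
# typically arrives as a string) to a declared expected type.
# Illustrative only; not the ApiOpenStudio implementation.
def cast_input(value: str, expected_type: str):
    casters = {
        "integer": int,
        "float": float,
        "boolean": lambda v: v.lower() in ("true", "1", "yes"),
        "text": str,
    }
    if expected_type not in casters:
        raise ValueError(f"unknown expected_type: {expected_type}")
    return casters[expected_type](value)

# A var_get value of "42" declared with expected_type "integer":
print(cast_input("42", "integer"))    # 42
print(cast_input("true", "boolean"))  # True
```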

2021 State of the API Report

The findings in this report are golden, and kudos to the Postman team for producing a well-balanced and researched report (https://www.postman.com/state-of-api/, 2021).

The full report is available at: https://www.postman.com/assets/api-survey-2021/postman-state-of-api-2021.pdf.

Its findings are highly encouraging and, reading between the lines, are a fantastic indicator that the industry is on target for continued adoption of mobile-first, API-first and microservice architectures.

Our key takeaways from this report:

The API ecosystem is global and growing

Postman reports continuing growth in API activity:

  • Users: 17 million
  • Collections created: 30 million (up 39%)
  • Requests created: 855 million (up 56%)

There are many more people than just developers using APIs

Breakdown of roles consuming APIs

Developers are spending more time with APIs

  • < 10 hours/week: 33%
  • 10–20 hours/week: 39%
  • > 20 hours/week: 28%

This was a rather surprising set of stats, and probably due to the respondents coming predominantly from API-driven development roles.

API first methodology

Encouragingly, there is increased awareness of the API-first methodology, and more businesses are approaching their architecture in this way:

Companies embracing API-first methodology

Sadly, there was an inconsistent understanding, or outright lack of understanding, of what API-first actually means:

Defining API-first

Public vs Private vs Partner

Of interest here is that the vast majority of APIs are intended for private use within companies. This ties in with the API-first methodology, where APIs are considered first-class citizens (Understanding the API-First Approach to Building Products, 2021):

APIs are treated as “first-class citizens.” That everything about a project revolves around the idea that the end product will be consumed by mobile devices, and that APIs will be consumed by client applications. An API-first approach involves developing APIs that are consistent and reusable, which can be accomplished by using an API description language to establish a contract for how the API is supposed to behave.

  • Private (only used by your team or your company): 58%
  • Partner (shared only with integration partners): 27%
  • Public (openly available on the web): 15%

Lack of time was the biggest obstacle to producing APIs

Over 45% of API developers claimed that their main impediment was lack of time.

JSON Schema is by far the biggest specification tool for APIs

JSON Schema was by far the top specification in use, cited by three-quarters of respondents.

Now this one really surprised us (we had assumed it would be OpenAPI 3.0, but that came in below Swagger 2.0). Considering that API documentation standards have still not coalesced into a single accepted standard, there should be no surprise at movements here, and it has to be said that JSON Schema is fantastic, especially for defining complex, nested object types.
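As a flavour of why JSON Schema suits complex, nested object types, here is a small schema and a deliberately minimal, hand-rolled check in Python. A real project would use a full validator library; this sketch only covers the type, required and properties keywords, and the schema itself is invented for the example.

```python
import json

# A JSON Schema fragment describing a nested object type.
schema = json.loads("""
{
  "type": "object",
  "required": ["user"],
  "properties": {
    "user": {
      "type": "object",
      "required": ["name"],
      "properties": {
        "name": {"type": "string"},
        "roles": {"type": "array"}
      }
    }
  }
}
""")

TYPES = {"object": dict, "array": list, "string": str}

def check(instance, schema) -> bool:
    # Minimal recursive check: type, required keys, nested properties.
    if "type" in schema and not isinstance(instance, TYPES[schema["type"]]):
        return False
    if isinstance(instance, dict):
        if any(key not in instance for key in schema.get("required", [])):
            return False
        for key, sub in schema.get("properties", {}).items():
            if key in instance and not check(instance[key], sub):
                return False
    return True

print(check({"user": {"name": "alice", "roles": ["admin"]}}, schema))  # True
print(check({"user": {"roles": ["admin"]}}, schema))                   # False (missing "name")
```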

Quality is the biggest priority for APIs, above security

Respondents identified the top priorities for their development teams and organisations

This was also a surprise to us, although it transpires that a lot of APIs consume public APIs, so we assume that when respondents specify quality, they mean the quality of the data and of the resource specification, leading to a better resource offering and more consumers of it.

Major change to the ApiOpenStudio repository location

In order to implement pipelines and docker, with automated builds of docker images, the ApiOpenStudio projects have all been added to a new ApiOpenStudio group in GitLab.

This will enable GitLab pipelines to orchestrate pipelines across all of the projects as code is pushed and merged.

There was a dependency on this for upcoming tickets and tasks, so the move could not be delayed any longer. As part of this change, we have merged the develop branch into the master branch, which will update the wiki and phpdoc to reflect these changes.

However, a new release tag for Packagist has not been generated at this stage, because we are only a few tasks away from the beta release.

New changes available in the master branch:

  • GitLab CI pipelines are now faster (#118 – closed).
  • Wiki pages updated (#118 – closed & #115 – closed).
  • Fixed CI artefacts not being uploaded on failure (#117 – closed).
  • Logging now works on PHP8.0 as well as PHP7.4 (#111 – closed).
    • This involved deprecating Cascade, and creating a wrapper for the awesome Monolog package.
  • Implemented full JWT token authentication (#101 – closed).
  • Fix automated unit and functional tests (#110 – closed).
  • The entire project code has been updated to ensure all the latest PHPdoc and coding standards are passed.
  • Fixed Packagist for apiopenstudio_admin – sorry, this was my bad – a copy-and-paste error that went unnoticed.

Contributors and developers using the codebase

If you have a clone of the GitLab repository, you will need to update your remote URL with the following command (assuming you have cloned with SSH):

git remote set-url origin git@gitlab.com:apiopenstudio/apiopenstudio.git

If you have a clone of the GitHub mirror, you will need to update your remote URL with the following command (assuming you have cloned with SSH):

git remote set-url origin git@github.com:naala89/apiopenstudio.git

If you have forked the GitLab repository, you can update the upstream URL:

git remote set-url upstream git@gitlab.com:apiopenstudio/apiopenstudio.git

The updated URLs:

  • The new group URLs
  • The GitLab project URLs
  • The GitHub mirror URLs

Exciting upcoming features for the Beta release

  • Unit and functional testing for PHP 8.0, to ensure ApiOpenStudio works across all contemporary PHP versions.
  • Composer 2.0 should be fine, but this will be tested before the beta release.
  • The Swagger processor will be brought up to date and fixed to allow importing and exporting of OpenAPI documents.
  • Automated tagging and generation of an ApiOpenStudio Docker image.

Are you hitting the low-code sweet spot?

Low-code solutions, as part of your IT landscape, are clearly gaining continued traction. Low-code now actually has its own Gartner Magic Quadrant!

Meanwhile, the other big gun, Forrester, reported that in 2019, 37% of developers in its worldwide survey were using or planning to use low-code products, and predicted that by mid-2020 this number would rise to more than half of developers.

Finally, to complete the trifecta, Capgemini have now included low-code in their “Top Ten Trends”. So all three planets are aligned.

Forrester research found that 100% of enterprises who have implemented a low-code development platform have received ROI (Forrester 2019, Large Enterprises Succeeding With Low-Code, viewed 23 June 2021, https://assets.appian.com/uploads/2019/03/forrester-tlp-lowcode.pdf).

As ever, a lot of what we read out there is a mix of genuine analysis and the marketing objectives of the company writing it. The question really becomes: are your low-code strategy and applications hitting your “low-code sweet spot”?

What low-code solutions do you need, and where? How big should you start with low-code? Whom do they enhance? And, importantly, where shouldn’t you use them?

It’s worth remembering that companies can go too far in trying to remove developer costs. Using low-code the wrong way, or too widely, can severely straitjacket your development options.

Developers and low-code

There is an ideal mix across four key areas, and it varies with each business and its development needs:

  • High level expensive developer talent.
  • Less experienced and lower cost developers.
  • The right people with skills to access low-code & no-code solutions.
  • What the industry is now calling “Citizen Developers” (keeping in mind they often know your business processes & requirements better than anyone).

Do you have the right low-code app in place, so that your expensive front-end developers don’t have to hand the requirements of an API to an equally expensive back-end developer (who is juggling this with another, equally mission-critical task), while the front-end dev has little on that week and will drop to lower-value tasks?

Or so you can take advantage of the extra efficiency gained because neither of them has to dedicate time to communicating what the front-end developer wants?

Communications tasks are typically underestimated costs

With a low-code solution like ApiOpenStudio, front-end developers can go straight to API creation. This can be great if you need to even out the load in a team whose members might otherwise be cooling their jets on less important tasks, or spending time defining an API and then sending it on to back-end developers to implement.

This flexibility, and being able to quantify it, is the key to tuning your low-code mix and making the team more efficient.

Finally, if they are both flat out, can a less experienced developer, or (in the right environment) a cross-trained “citizen developer” with basic JSON or YAML skills, be deployed? Ideally they should be close to the project and its requirements.

Low-code enables members of the team who are closer to the requirements and to product or project development to build and manage an API themselves. In many cases, the time they would have spent communicating this to others is replaced with actually developing the product.

Equality does not exist in low or no-code

Low-code and no-code platforms exist on a spectrum. At one extreme, you have platforms offering very basic functionality, e.g. simple form and logic creation combined with rudimentary document-automation capabilities. At the other, you have platforms allowing citizen developers to build large, end-to-end workflow solutions, encompassing features like e-signature integrations, multi-step approvals, email reminders and data management.

So time and thought need to be put into the use cases that you want to address with low-code implementations. This will prevent you from facing the often frustrating situation, familiar to project and product managers, where developers reply “nope, that can’t be done” due to the limitations of the software.

The balance

Like just about all movements in IT that become long-term, there is a lot more to taking it to your business and marketplace than the initial marketing hype. The real, sustainable change is almost always different, and requires a deeper understanding of how things really work to make sure the rubber hits the road.

So what do you really need to consider to realise the value of low-code across an organisation? 

The fact is that low-code involves a trade-off, one that is worth making, but a trade-off nonetheless.

On the one hand, low-code enables those closest to the product and business requirements to build what they need and build it faster. It eliminates layers of process and management… business units can, in the right environment, move forward without consulting IT. Low-code makes business Agility happen, as it changes how the business works with software.

HOWEVER…… 

The fact is, though low-code is highly effective for many businesses, the MORE you use it, the more you straitjacket your development. That is the trade-off.

This is one of the reasons why pro-code (or pure) developers have little to fear from low-code. Though surveys show many of them fear this, it is not borne out in the data, particularly for the next decade: Microsoft recently estimated that there would be a shortfall of one million developers in the USA alone.

Being able to plan and resource your company’s low-code mix, as well as advise where it is not appropriate (like when your CFO thinks he can do it all with low-code just to save money!), is becoming part of the career skill set for professional developers.

How low can you go?

Low-code, by definition, also enables fast followers, as they have a pathway to follow that is quicker and cheaper. So I would think twice about ever letting your marketing department tell the world how you got there.

We think it’s important to realise (after years of researching and discussing this market trend with stakeholders) that low-code and pro-code do not cancel each other out. No organisation should aim to be one or the other.

So the “Democratisation of development”, like all of the most successful democracies, needs good checks and balances: judges, oversight and impartiality in the execution.

Summary

So, as you would expect, there are quantifiable aspects to this:

Is it giving you enough power, while liberating you from increasing development costs, driven by the rising price of developers and the need for ever more of them as companies race to meet the demand for richer digital experiences?

Whole platforms are not the place to start, and may not be the place to go at all. But starting with something like API creation and management can reduce the cost of running both internal apps and the outward-facing business and web apps that the customer sees. In most cases, these apps will rely heavily on external feeds, and there is great benefit in a low-code approach here.

Increased security and speed with JWT tokens

Current dev work is almost complete on implementing authorisation with JWT tokens for all resources! This will be part of the upcoming beta release.

The ticket can be viewed in GitLab.

This will replace the existing alpha version of a custom token and token TTL for each user in the user table.

It is quite important to note, before we move on, that JWT tokens are a different thing from OAuth2 and its implicit grant, explicit grant, application grant and PKCE authorisation flows. JWT is only a standard for tokens. If you need to implement OAuth2 or other similar workflows, this is separate from the JWT implementation.

The problem

The problem with the former approach was that resource requests had to make DB calls to the user, user_roles, roles, account and application tables in order to verify the user’s permissions on that particular resource, FOR EVERY API CALL. This obviously hurt the performance of API calls.

This also meant that authorisation was not easily scalable to authorisation servers for enterprise implementation, because the implementation of the token and authorisation for API calls was tightly coupled to the ApiOpenStudio database and several of its tables.

The solution

Although the former approach was stateful (it maintained login state, so users could log in and out), the stateless JWT approach means that the token does not need to be stored in the database. The downside of stateless JWT tokens is that there is no logout state. So if a user’s access is revoked, they will still have access to resources until their current token goes stale.

However, this can be mitigated by making the JWT token lifetime short in the ApiOpenStudio configuration.

Each JWT token contains custom claims for the user ID and all roles that the user has. So when a request is received by ApiOpenStudio, it just decodes and validates the token and checks the user’s roles against the resource’s account/application and its permissible user roles (i.e. does the current user have the required role access to the account and application?).
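As an illustration of the mechanics, here is a stdlib-only Python sketch of issuing and validating an HS256 JWT carrying uid and roles custom claims and a short exp lifetime. This is a hedged sketch, not ApiOpenStudio's PHP implementation; the secret key and helper names are invented for the example, and a real deployment would use a maintained JWT library.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # hypothetical signing key for this sketch only


def b64url(data: bytes) -> str:
    # JWTs use unpadded, URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(uid: int, roles: list, ttl: int = 600) -> str:
    # Payload carries the custom claims described above (uid, roles),
    # plus a short "exp" to limit the window after access is revoked.
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {"uid": uid, "roles": roles, "exp": int(time.time()) + ttl}
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


def validate_token(token: str) -> dict:
    # Verify the signature and expiry, then return the claims: no DB calls.
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload


claims = validate_token(make_token(42, ["developer"]))
print(claims["uid"], claims["roles"])  # 42 ['developer']
```

Note that role checking against the requested account/application happens after this step, using the roles claim alone, which is what removes the per-request database lookups.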

Knock-on effects

The following processors have been retired:

  • user_login.
  • user_logout.

Nearly all core resources have been updated to use the new processors:

  • generate_token (generate a valid JWT token for a user, with custom claims: uid, user roles).
  • validate_token (validate the Authorization token as a valid JWT token).
  • validate_token_roles (validate the Authorization token as a valid JWT token, and also validate that the user has the correct role permissions for the resource).
  • bearer_token (not used by core at the moment, but preserved for any processors that need access to the bearer token).

Processors have been optimised now that they do not need to do any pre-validation of who can do what; this is left to the core resource definitions.

Tests are updated to incorporate the changes, and also now have multiple test users with different roles.

The good news

Not only has this significantly improved the API response time, it has also made the API much more scalable for enterprise. We communicated with, and researched, several major 3rd-party authorisation services, including Auth0, to make sure that the decision to move to JWT tokens and custom claims would still be viable if a 3rd-party auth server were used.

Most 3rd-party authorisation services can link into external databases, which would take the heat off the API server for token generation and allow token generation to be completely decoupled from ApiOpenStudio. This will be the subject of a future post.
