
API Real Time News

These are the news items I've curated in my monitoring of the API space that have some relevance to the real time conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is approaching real time with their APIs, going beyond just the details of each request and response.

Bringing The API Deployment Landscape Into Focus

I am finally getting the time to invest more into the rest of my API industry guides, which involves deep dives into core areas of my research like API definitions, design, and now deployment. The outline for my API deployment research has begun to come into focus and looks like it will rival my API management research in size.

With this release, I am looking to help onboard some of my less technical readers with API deployment. Not the technical details, but the big picture, so I wanted to start with some simple questions to help prime the discussion around API deployment.

  • Where? - Where are APIs being deployed? On-premise and in the clouds, using traditional website hosting, and even containerized and serverless API deployment.
  • How? - What technologies are being used to deploy APIs? From using spreadsheets, document and file stores, or the central database, to thinking smaller with microservices, containers, and serverless.
  • Who? - Who will be doing the deployment? Of course, IT and developer groups will be leading the charge, but increasingly business users are leveraging new solutions to play a significant role in how APIs are deployed.

The Role Of API Definitions

While not every deployment will be auto-generated using an API definition like OpenAPI, API definitions are increasingly playing a lead role as the contract that doesn't just deploy an API, but also sets the stage for API documentation, testing, monitoring, and a number of other stops along the API lifecycle. I want to make sure to point out in my API deployment research that API definitions aren't just overlapping with deploying APIs, they are essential to connecting API deployments with the rest of the API lifecycle.
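To make the contract idea concrete, here is a minimal sketch of a definition-driven deployment, using the open source connexion framework for Python; the openapi.yaml file name is just a placeholder for whatever definition you are deploying from.

```python
# pip install connexion -- an open source framework that deploys an API
# directly from its OpenAPI definition.
import connexion

app = connexion.App(__name__)
app.add_api("openapi.yaml")  # placeholder: your OpenAPI contract

if __name__ == "__main__":
    # The same contract that deploys the API can also drive documentation,
    # testing, and monitoring further along the API lifecycle.
    app.run(port=8080)
```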

Using Open Source Frameworks

Early on in this research guide I am focusing on the most common way for developers to deploy an API: using an open source API framework. This is how I deploy my APIs, and there are an increasing number of open source API frameworks available out there, in a variety of programming languages. In this round I am taking the time to highlight at least six separate frameworks in the top programming languages where I am seeing sustained deployment of APIs using a framework. I don't take a stance on any single API framework, but I do keep an eye on which ones are still active, and enjoying usage by developers.
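As a point of reference, this is roughly what hand-rolling a simple API with one of these open source frameworks looks like, shown here as a minimal sketch using Python's Flask:

```python
# pip install flask
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/apis", methods=["GET"])
def list_apis():
    # A stand-in endpoint; a real deployment would pull from a data source.
    return jsonify([{"name": "example-api", "version": "1.0"}])

if __name__ == "__main__":
    app.run(port=5000)
```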

Deployment In The Cloud

After frameworks, I am making sure to highlight some of the leading approaches to deploying APIs in the cloud, going beyond just a server and framework, and leveraging the next generation of API deployment service providers. I want to make sure that both developers and business users know that there are a growing number of service providers who are willing to assist with deployment, and with some of them, no coding is even necessary. While I still like hand-rolling my APIs using my preferred framework, when it comes to some simpler, more utility APIs, I prefer offloading the heavy lifting to a cloud service, saving me the time of getting my hands dirty.
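To give a sense of how little code a serverless deployment can involve, here is a minimal sketch of an AWS Lambda function sitting behind API Gateway; the event shape follows the standard proxy integration, but treat the specifics as illustrative.

```python
import json

def handler(event, context):
    # API Gateway's proxy integration delivers the HTTP request as `event`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```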

Essential Ingredients For Deployment

Whether in the cloud, on-premise, or even on device and at the network level, there are some essential ingredients to deploying APIs. In my API deployment guide I wanted to make sure and spend some time focusing on the essential ingredients every API provider will have to think about.

  • Compute - The base ingredient for any API, providing the compute under the hood. Whether it is bare metal, cloud instances, or serverless, you will need a consistent compute strategy to deploy APIs at any scale.
  • Storage - Next, I want to make sure my readers are thinking about a comprehensive storage strategy that spans all API operations, and hopefully multiple locations and providers.
  • DNS - Then I spend some time focusing on the frontline of API deployment--DNS. In today's online environment DNS is more than just addressing for APIs, it is also security.
  • Encryption - I also make sure encryption is baked into all API deployments by default, both in transit and in storage.
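On the encryption ingredient, verifying that transport encryption is actually in place is easy to script. Here is a small Python sketch that checks the TLS certificate an API host is serving; the host name is just a placeholder.

```python
import socket
import ssl

def tls_certificate_expiry(host: str, port: int = 443) -> str:
    """Return the expiry date of the TLS certificate served by a host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

print(tls_certificate_expiry("example.com"))  # placeholder host
```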

Some Of The Motivations Behind Deploying APIs

In previous API deployment guides I usually just listed the services, tools, and other resources I had been aggregating as part of my monitoring of the API space. Slowly I have begun to organize these into a variety of buckets that help speak to many of the motivations I encounter when it comes to deploying APIs. While not a perfect way to look at API deployment, it helps me think about the many reasons people are deploying APIs, craft a narrative, and provide a guide for others to follow that is potentially aligned with their own motivations.

  • Geographic - Thinking about the increasing pressure to deploy APIs in specific geographic regions, leveraging the expansion of the leading cloud providers.
  • Virtualization - Considering the fact that not all APIs are meant for production and there is a lot to be learned when it comes to mocking and virtualizing APIs.
  • Data - Looking at the simplest of Create, Read, Update, and Delete (CRUD) APIs, and how data is being made more accessible by deploying APIs.
  • Database - Also looking at how APIs are being deployed from relational, NoSQL, and other data sources--providing the most common way for APIs to be deployed.
  • Spreadsheet - I wanted to make sure and not overlook the ability to deploy APIs directly from a spreadsheet, putting APIs within reach of business users.
  • Search - Looking at how document and content stores are being indexed and made searchable, browsable, and accessible using APIs.
  • Scraping - Another often overlooked way of deploying an API, from the scraped content of other sites–an approach that is alive and well.
  • Proxy - Evolving beyond early gateways, using a proxy is still a valid way to deploy an API from existing services.
  • Rogue - I also wanted to think more about some of the rogue API deployments I’ve seen out there, where passionate developers reverse engineer mobile apps to deploy a rogue API.
  • Microservices - Microservices has provided an interesting motivation for deploying APIs–one that potentially can provide small, very useful and focused API deployments.
  • Containers - One of the evolutions in compute that has helped drive the microservices conversation is the containerization of everything, something that complements the world of APIs very well.
  • Serverless - Augmenting the microservices and container conversation, serverless is motivating many to think differently about how APIs are being deployed.
  • Real Time - Thinking briefly about real time approaches to APIs, something I will be expanding on in future releases, and thinking more about HTTP/2 and evented approaches to API deployment.
  • Devices - Considering how APIs are being deployed on device, when it comes to Internet of Things and industrial deployments, as well as at the network level.
  • Marketplaces - Thinking about the role API marketplaces like Mashape (now RapidAPI) play in the decision to deploy APIs, and how other cloud providers like AWS, Google, and Azure will play in this discussion.
  • Webhooks - Thinking of API deployment as a two way street. Adding webhooks into the discussion and making sure we are thinking about how webhooks can alleviate the load on APIs, and push data and content to external locations (see the sketch after this list).
  • Orchestration - Considering the impact of continuous integration and deployment on API deployment specifically, and looking at it through the lens of the API lifecycle.
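To make the webhooks bucket a little more concrete, here is a minimal sketch of the push side of that two way street, where an API delivers new data to registered subscriber URLs instead of making everyone poll; the subscriber list and event shape are hypothetical.

```python
# pip install requests
import requests

# Hypothetical subscriber registry; a real API would store these per consumer.
SUBSCRIBERS = [
    "https://example.com/hooks/new-record",
]

def push_event(event: dict) -> None:
    """Deliver an event to every registered webhook subscriber."""
    for url in SUBSCRIBERS:
        try:
            requests.post(url, json=event, timeout=5)
        except requests.RequestException as err:
            print(f"delivery to {url} failed: {err}")

push_event({"type": "record.created", "id": 123})
```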

I feel like API deployment is still all over the place. The mandate for API management was much better articulated by API service providers like Mashery, 3Scale, and Apigee, but nobody has taken the lead when it comes to API deployment. Service providers like DreamFactory and Restlet have kicked ass when it comes to not just API management, but also making sure API deployment is part of the puzzle. Newer API service providers like Tyk are also pushing the envelope, but I still don't have the number of API deployment providers I'd like when it comes to referring my readers. It isn't a coincidence that DreamFactory, Restlet, and Tyk are API Evangelist partners; it is because they have the services I want to be able to recommend to my readers.

This is the first time I have felt like my API deployment research has been in any sort of focus. I carved this layer of my research off of my API management research some years ago, but I really couldn't articulate it very well beyond just open source frameworks and the emerging cloud service providers. After I publish this edition of my API deployment guide I'm going to spend some time in the 17 areas of my research listed above. All these areas are heavily focused on API deployment, but I also think they are each worth looking at individually, so that I can better understand where they intersect with other areas like management, testing, monitoring, security, and other stops along the API lifecycle.


Real Time Is Often More About What They Desire Than What We Want

There are many definitions of what exactly constitutes "real time". I find it is a very relative thing, depending on who you talk to. When asked, many will respond with push notifications as an example. Others immediately think of chat and messaging. If you are talking to developers, they will reference specific technologies like XMPP, Jabber, and WebSockets.

Real time is relative. It is relative to the situation, and to those involved. I'd say real time also isn't good by default in all situations. The need for real time might change or evolve, and mean different things in different industries. All of this variance really opens up the concept to a lot of manipulation and abuse.

I feel like those who are wielding real time often speak of the benefits to us, when in reality it is about real time in service of what they desire. They want a real time channel to you so they can push to you anytime, and get the desired action they are looking for (i.e. click, view, purchase). In this environment, the concept of real time quickly becomes just noise, distraction, and many other negative things--often rendering real time a pretty bad idea.


Making Web Concepts and Specs Present As Real Time Help In API Design Tooling

I took the Github repository for Erik Wilde's (@dret) Web Concepts work, forked it, and then generated some JSON that I could use to import into my API monitoring system. I've been manually adding specs to my Tweet and LinkedIn scheduling system, but I keep forgetting to go back to the site and add more entries. So I wanted to go ahead and import all the concepts and specs, and schedule out the tweets and LinkedIn posts for everything over the next couple of months.

First I generated the JSON for the concepts, and then the JSON for the specs.
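For a sense of what the import step looks like, here is a minimal sketch of scheduling the posts out over the coming months; the field names in the JSON are hypothetical stand-ins for whatever the generated files actually contain.

```python
import json
from datetime import date, timedelta

# Load the generated concepts file; `concept` and `url` are hypothetical
# field names standing in for the actual structure of the generated JSON.
with open("concepts.json") as f:
    concepts = json.load(f)

start = date.today()
for i, concept in enumerate(concepts):
    post_on = start + timedelta(days=i)  # spread posts over the coming months
    post = f"{concept['concept']}: {concept['url']}"
    print(post_on.isoformat(), post)
```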

I left out the relationships between the concepts and specs, as I will just be linking to Web Concepts, and letting people explore for themselves. Looking through the JSON got me thinking about why these concepts and specs aren't available in API design tooling, as helpers and tooltips, so that API designers and architects can learn from them and be reminded in real time--as they are crafting their APIs.

It seems like there should be autocomplete for HTTP header fields, HTTP status codes, and other relevant items as they are needed. There is a wealth of web literacy available in Erik's work, and across the web concepts and specs he has organized. It seems like these should be available by default within API design services and tooling, and start being baked into IDEs like Atom, Eclipse, and Visual Studio--maybe they already are, and I'm just unaware.
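As a toy illustration of the idea, an API design tool could surface something like the following as a designer types a status code; the handful of entries here are just a sample of what Web Concepts catalogs.

```python
# A tiny slice of the web literacy a design tool could surface as tooltips:
# HTTP status codes mapped to their meaning and the spec that defines them.
STATUS_CODES = {
    200: ("OK", "RFC 7231"),
    201: ("Created", "RFC 7231"),
    404: ("Not Found", "RFC 7231"),
    429: ("Too Many Requests", "RFC 6585"),
}

def suggest(prefix: str) -> dict:
    """Return the codes matching what the designer has typed so far."""
    return {code: info for code, info in STATUS_CODES.items()
            if str(code).startswith(prefix)}

print(suggest("4"))  # {404: ('Not Found', 'RFC 7231'), 429: (...)}
```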


Fine Tuning My Real Time For Maximum Efficiency

I am working hard to fine tune my world after coming back from the wilderness this summer. Now that I'm back I am putting a lot of thought into how I can optimize for efficiency, as well as for my own happiness. As I fire back up the old API Evangelist machine, I'm looking at every concept in play, process being used, and tool in production, and evaluating how it benefits me or creates friction in my world.

During the next evolution of API Evangelist, I am looking to maximize operations, while also helping ensure that I do not burn out again (5 years was a long time). While hiking on the trail I thought A LOT about what real time is, and upon my return, I've been reverse engineering what is real time in my world, and fine tuning it for maximum efficiency in helping me achieve my objectives.

As I had all the moving parts of real time spread out across my workbench, one thing I noticed was the emotional hooks it likes to employ. When I read a Tweet that I didn't agree with, read a blog post that needed a rebuttal, or saw a Slack conversation that @mentioned me--I felt like I needed to reply. When in reality, there is no reason to reply to real time events in real time. This is what real time wants, not always something you want.

I wanted to better understand this element of my real time world, so I reassembled everything and set it back into motion--this time with a delay switch on ALL responses to real time events across all my channels. No matter how badly I wanted to, I was forbidden from responding to anything within 48 hours. It was hard at first, but I quickly began to see some interesting efficiency gains and a better overall sense of psychological well-being.

Facebook, Twitter, Github, and Slack were all turned off, and only allowed to be turned on a couple times a day. I could write a response to a blog post, but I wouldn't allow myself to post it for at least two days. I actually built this delay switch into my world, as a sort of scheduling system for my platform, which allows me to publish blog posts, Tweets, Github commits, and other pushes that were often real time, using a master schedule.
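The delay switch itself is a simple idea. Here is a minimal sketch of how such a scheduling queue could work; the 48 hour delay matches what I describe above, and everything else is illustrative.

```python
import heapq
import time

# Everything I want to publish goes into a queue with an earliest-allowed
# publish time at least 48 hours out, instead of going out in real time.
DELAY_SECONDS = 48 * 60 * 60
queue = []  # (publish_at, message) pairs, ordered by publish time

def schedule(message: str) -> None:
    heapq.heappush(queue, (time.time() + DELAY_SECONDS, message))

def publish_due() -> None:
    now = time.time()
    while queue and queue[0][0] <= now:
        _, message = heapq.heappop(queue)
        print("publishing:", message)

schedule("rebuttal blog post")
publish_due()  # nothing happens -- the 48 hour delay has not yet elapsed
```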

After a couple of weeks, my world feels more like I have several puppets on strings, performing in a semi-scripted play. Before, it felt the other way around: I was a puppet on other people's strings, performing in a play I'd never seen a script for.


The Real Time Device Software Update Certification Chain

Each device will push to the cloud, or to an intermediary, some sort of credentials certifying that the latest vulnerability update was applied.
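This is just the seed of an idea, but a hypothetical sketch of what such a certification push could look like is a device signing a receipt for each update it applies; everything here, from the secret provisioning to the payload shape, is an assumption.

```python
import hashlib
import hmac
import json
import time

# Assumption: each device holds a secret provisioned at manufacture that it
# can use to sign receipts proving which vulnerability update it applied.
DEVICE_SECRET = b"per-device-secret"

def update_receipt(device_id: str, update_version: str) -> dict:
    payload = {"device": device_id, "update": update_version,
               "applied_at": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(DEVICE_SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

# The device would push this receipt to the cloud, or to an intermediary.
print(update_receipt("device-001", "2017.03-security"))
```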


Working To Avoid The Drowning Effects Of Real Time

One thing I'm experiencing as I come out of my Drone Recovery project is the drowning effects of our real-time worlds. I am talking about the desire to stay connected in this Internet age, subscribing to as many available channels as possible (i.e. Facebook, Twitter, LinkedIn, RSS, etc.), and more importantly, tuning in and responding to these channels in real time.

You hear a lot of talk about information overload, but I don't feel the amount of information is the problem. For me, the problem comes in with the emotional investment demanded by real time, and the ultimate toll it can take on your productivity, or just your general happiness and well-being. You can see this play out in everything from the expectation that you should respond to emails, all the way to social network memes grabbing your attention when it comes to the election, or for me personally, the concerns around security and privacy in using technology.

The problem isn't the amount of information, it is the emotional toll of real time. I can keep up with the volume of information; it's once I start paying the toll fee associated with each item that it begins to add up. I feel the toll fee is higher in the real-time lane than when you engage on your own schedule. The people who demand I respond to emails, and be first to the story, have skin in the game, and will be collecting a portion of the toll fee, so it is in their best interest to push you to be real time.

Sure, some items in all of this will be perishable. I am not applying this line of thinking across the board, but I am prioritizing things with this in mind. In an increasingly digital world, the demands on our time are only going to increase. To keep from drowning, I'm going to get more critical about what I accept into my world in a real time way. My goal is to limit the emotional toll I pay, and maximize my ability to focus on the big picture when it comes to how technology, and specifically APIs, are impacting our world.


Making Scientific Research More Real Time And Collaborative Using APIs

I had heard about the Zika virus research going on at the University of Wisconsin while listening to an NPR episode this last spring. I finally had the time to dig into the topic a little more, and learn more about where the research is at, and some of the software behind the sharing and collaboration around it.

The urgency in getting the raw data and results of the research out to the wider scientific community caught my attention, and the potential for applying API related approaches seems pretty huge. When it comes to mission-critical research that could impact thousands or millions of people, it seems like there should be a whole suite of open tooling that people can employ to ensure sharing and collaboration are real time and frictionless.

As I dug into the Zika virus research, I was happy to find LabKey technology employed to publish the research. I do not know much about them yet, but I was happy to see an open source community solution, and developer resources including a web API for integrating with research that is published using the platform. There are plenty of improvements I'd like to see added to the API and developer efforts, but it is a damn good start when it comes to making important scientific research much more shareable and collaborative.

I'll spend more time learning about what LabKey currently offers, and then work to establish some sort of next steps blueprint that would employ other modern API approaches to help ensure important research can be made more real time, aggregated, interoperable, and shareable, using technology like API definitions, webhooks, iPaaS, and other common areas of a modern API effort.

When it comes to research, scientists should have a wealth of open tooling and resources that make their work a collaborative and shareable process by default, but with as much control as they desire--something modern web API solutions excel at. I added LabKey to a new research area dedicated to science. I will spend more time going through the space, and see what guides, blueprints, and other resources I can pull together to assist researchers in their API journey.


Moving Cellular Towers In Real Time Response To Where Cellular Customers Are #DesignFiction

The age of the mountain top and building-mounted cellular tower is coming to a close. Our real-time cellular drone algorithm is now in active use in over 3,300 telco fleets around the globe. After studying the daily activity of over 100 million cellular customers, we were able to assemble a reliable algorithm that guides fleets of cellular network drones to where they are most needed in real time.

It just didn't make sense to have cellular network towers be stationary anymore when our mobile users are, well...mobile. We needed our cellular network to move, expand, and grow along with our user base. Thanks to advances in drone technology, our network of thousands of drones deploys, hovers, migrates, and returns home in real time response to demand.

Our initial implementations were all successful due to the hard work behind, and quality of, our algorithm, but with the data we are now receiving from the 3,300 drone fleets around the globe, our new Swarm Tower (TM) technology is insanely precise. Rarely do we ever see a network overload, or a mobile customer who cannot get the bandwidth they need (and are willing to pay for) at any given moment.

What has really surprised us is the use of the network beyond mobile. Early on, 90% of network connections were consumer and business smartphone devices, while now over 60% of network capacity is other consumer, commercial, and industrial devices. Expanding our mobile networks to meet this demand would never have been possible using traditional cell tower technology.

If you are as excited about Swarm Tower technology as we are, you will be interested in our Q2 announcements around secondary drone activities while they are supporting cellular network activity--things like surveillance, weather monitoring, and other common activities.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, so I depend on my network to help me know what is going on.