
Thursday, May 25, 2017

10 HTTPS Implementation Mistakes

10 HTTPS Implementation Mistakes - SEMrush Study

Elena Terenteva
Moving your website to HTTPS is not a nice-to-have SEO bonus or the prerogative of big business; it is a must for all kinds of websites. The volume of encrypted traffic is growing year after year, and, according to Firefox telemetry, on January 29, 2017, half of all Internet traffic was secure. That is a big deal.
The significance of this tipping point really can’t be overstated.
Ross Schulman, co-director of the New America Foundation’s cybersecurity initiative (Source).
If your website is still on the ‘dark side,’ you should reconsider your perception of encrypted traffic. In our previous article we talked about the influence and importance of HTTPS: it’s a strong ranking signal, it’s a trust signal that increases users’ confidence, and, finally, it’s a guaranteed way to protect your website data from certain types of attacks.
Today we are going to talk about mistakes that can occur during HTTPS implementation and ways to fix and avoid them, so if you have already moved your website to HTTPS or are just thinking about it, this article will help you to avoid some of the most common pitfalls.
HTTPS Implementation mistakes

HTTPS Implementation with SEMrush

Is your website secure?

HTTPS implementation mistakes

All statistical data for this article was obtained during research conducted using the SEMrush Site Audit tool. We collected anonymous data on 100,000 websites in order to find out the frequency of HTTPS implementation mistakes. First of all, we should say that only 45% of the websites we analyzed support HTTPS, and all data on the frequency of HTTPS-related errors was collected during the analysis of those secure domains.
Google has very clearly specified the HTTPS pitfalls that may occur and should be avoided. Now let’s take a closer look at each one and thoroughly examine the ways these errors can occur.

Non-secure Pages with Password Inputs

Beginning in January 2017 (Chrome 56), we’ll mark HTTP pages that collect passwords or credit cards as non-secure, as part of a long-term plan to mark all HTTP sites as non-secure
Google Security Blog - Moving towards a more secure web
To identify the frequency of this error, we analyzed all 100,000 domains, because Google has strict requirements about ‘non-secure’ pages — any page that collects passwords should be encrypted. We hope that this initiative will facilitate the expansion of HTTPS. But for now, 9% of analyzed websites still have insecure pages with password inputs.

Website Architecture Issues

Mixed Content

Mixed content occurs when your page is loading over secure HTTPS connections, but it contains elements (such as images, links, IFrames, scripts, etc.) that are not secured with HTTPS.
First of all, this may lead to security issues. Moreover, browsers will warn users about loading insecure content, and this may negatively affect user experience and reduce users’ confidence in your website.
And the extent of this problem is greater than you might think: 50% of websites have it. Evaluating this issue manually is very time-consuming, because one site can contain hundreds of pages, and that is what makes mixed content a real problem.
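A quick way to surface mixed content yourself is to scan a page's HTML for http:// references in attributes that load subresources or link out. A minimal sketch in Python (the regex-based approach is a simplification for illustration; a production checker would use a real HTML parser):

```python
import re

# Attributes that typically load subresources or link out.
_URL_ATTRS = re.compile(
    r"""(?:src|href|action)\s*=\s*["'](http://[^"']+)["']""",
    re.IGNORECASE,
)

def find_mixed_content(html: str) -> list[str]:
    """Return every plain http:// URL referenced by a page's markup."""
    return _URL_ATTRS.findall(html)
```

Running this over the HTML of each HTTPS page flags every insecure element, e.g. `find_mixed_content('<img src="http://cdn.example.com/a.png">')` returns that image URL, while an https:// reference is left alone.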

Internal Links on an HTTPS Site Leading to HTTP Pages

All internal website links, images, scripts, etc. should point to HTTPS versions. This is extremely important, especially if no redirects or HSTS are implemented. Even when redirects are in place, it is still better to change links to their HTTPS versions. This is another error that can occur when moving a website to HTTPS, and it seems to be the biggest problem, because fixing it is also time-consuming due to the number of pages that need to be analyzed: 50% of the websites we analyzed face this pitfall.

No Redirects or Canonicals to HTTPS URLs From HTTP Versions

When moving your site from HTTP to HTTPS, it is important to appropriately redirect canonical pages. This matters for several reasons. First, for supporting a stable, secure website experience, which is obvious. Second, for SEO: if the HTTP and HTTPS versions coexist without being connected, search engines are not able to figure out which page to index and which one to prioritize in search results. As a result, you may experience a lot of problems, including pages competing with each other, traffic loss and poor placement in search results.
Properly implemented redirects or canonicalization can improve a website's positions by combining all the signals.
This problem is not detrimental to websites using HSTS, because HSTS prevents browsers from communicating over HTTP, so we didn’t take them into account during our research. We discovered that on 8% of the websites we analyzed (excluding ones supporting HSTS), the HTTP home page does not redirect or point to its HTTPS version. And keep in mind, we are just talking about home pages here; can you imagine how many pages on the rest of these websites have not been properly redirected?

HTTP URLs in the sitemap.xml for HTTPS Site

Again, this mistake can easily occur when moving a website to HTTPS.
To prevent Google from incorrectly making the HTTP page canonical, you should avoid including HTTP pages in your sitemap or hreflang entries rather than their HTTPS versions.
Although this seems to be a clearly described requirement, 5.5% of websites make this mistake. When moving your website to HTTPS, you don’t need to create another sitemap.xml file; just change the protocol of the URLs in your existing sitemap to HTTPS.
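Scanning a sitemap for leftover HTTP URLs is easy to automate with the standard library alone; a sketch (the namespace URI is the standard sitemaps.org schema):

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace, as defined by the sitemaps.org protocol.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def insecure_sitemap_urls(sitemap_xml: str) -> list[str]:
    """Return every <loc> entry in a sitemap that still uses http://."""
    root = ET.fromstring(sitemap_xml)
    return [
        loc.text
        for loc in root.iter(SITEMAP_NS + "loc")
        if loc.text and loc.text.startswith("http://")
    ]
```

Feed it the text of your sitemap.xml and it lists exactly the entries that need their protocol changed.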
To learn how to properly migrate your site to HTTPS, check out this guide —  All you need to know for moving to HTTPS by Fili Wiese.  

Security Certificate Mistakes

Expired SSL Certificate

An SSL certificate (Secure Sockets Layer certificate) is used to establish a secure connection between a server and a browser and to protect data on your website from being stolen. For businesses that work with confidential data, like customers’ credit card and social security numbers, an expired SSL certificate risks a loss of credibility. An expired certificate also triggers a warning message for users once they enter your website, which will negatively affect your bounce rate. During our research, we found that 2% of the analyzed websites have expired SSL certificates.
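Expiry is also easy to monitor yourself: Python's ssl module exposes the certificate's notAfter field, and a small helper can turn it into a days-remaining number (the date format shown is the one getpeercert() returns):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """not_after is the 'notAfter' field from ssl's getpeercert(),
    e.g. 'Jun  1 12:00:00 2026 GMT'. Negative means already expired."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def cert_not_after(host: str, port: int = 443) -> str:
    """Fetch the expiry field from a live site's certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

Scheduling `days_until_expiry(cert_not_after("example.com"))` in a cron job gives you your own expiry warning well before browsers start warning your users.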

SSL Certificate Registered to an Incorrect Domain Name

This error occurs when the domain name to which your SSL certificate is issued doesn’t match the domain name displayed in the address bar.  This mismatch mistake appeared on 6% of the analyzed websites.
The higher frequency of this error, compared to the previous one, can be explained by the misconception that an SSL certificate issued only to the root domain (example.com) works for subdomains (info.example.com). This mistake can occur even if the certificate is installed correctly. For example, if a website’s SSL certificate is issued for www.example.com, a user entering example.com will get to the website but receive an error notification.
This problem can be solved by using a Multi-Domain certificate, which allows you to use one certificate for multiple domain names or IP addresses. Note that unqualified names (www), local names (localhost), or private IP addresses violate the certificate's specification.
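The matching rule behind these errors is simple to reason about: a hostname either equals a subjectAltName entry exactly, or is covered by a single-label wildcard. A simplified sketch of that rule (real validation, per RFC 6125, has more edge cases than this):

```python
def hostname_matches(hostname: str, cert_names: list[str]) -> bool:
    """Check a hostname against a certificate's subjectAltName entries,
    with single-label wildcard support (e.g. '*.example.com')."""
    host = hostname.lower()
    for name in cert_names:
        name = name.lower()
        if name.startswith("*."):
            base = name[2:]
            # A wildcard covers exactly one extra label:
            # *.example.com matches info.example.com, not a.b.example.com.
            if host.endswith("." + base) and host.count(".") == name.count("."):
                return True
        elif host == name:
            return True
    return False
```

This makes the article's example concrete: a certificate listing only `www.example.com` does not match `example.com`, and one listing only `example.com` does not match `info.example.com`.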

Server Issues

No HTTP Strict Transport Security (HSTS) Server Support

The HSTS protocol informs web browsers that they can communicate with servers only through secured HTTPS connections. Let’s say a user types your website’s address as http://example.com; HSTS instructs the browser to use the HTTPS version instead.
HSTS is a protection from downgrade attacks and cookie hijacking. This is a way to secure users from a man-in-the-middle attack.
A man-in-the-middle attacker attempts to intercept traffic from a victim user using an invalid certificate and hopes the user will accept the bad certificate. HSTS does not allow a user to override the invalid certificate message 
86% of analyzed websites don’t support HSTS. And it’s no surprise: the technology is quite new, and browsers have only recently started to support it. Hopefully, next year we’ll see a different picture with a positive trend.
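Checking for HSTS is as simple as inspecting one header on an HTTPS response. A sketch classifying the Strict-Transport-Security header (treating one year as the minimum healthy max-age is a common recommendation, not a hard requirement):

```python
import re

ONE_YEAR = 31536000  # seconds; a commonly recommended minimum max-age

def hsts_status(headers: dict[str, str]) -> str:
    """Classify the Strict-Transport-Security header of an HTTPS response."""
    value = headers.get("Strict-Transport-Security")
    if value is None:
        return "missing"
    m = re.search(r"max-age=(\d+)", value)
    if not m or int(m.group(1)) == 0:
        return "disabled"  # max-age=0 tells browsers to forget the policy
    return "ok" if int(m.group(1)) >= ONE_YEAR else "short max-age"
```

Point any HTTP client at your HTTPS home page, pass the response headers in, and "missing" or "disabled" means your site is among that 86%.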

Old Security Protocol Version (TLS 1.0 or older)

Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols, which provide a secure connection between a website and a browser, must be regularly updated to new, strong versions: 1.1 or higher. There’s no discussion; this is a must. An outdated protocol version makes it very easy for attackers to steal your data. It’s a critical error, yet it appears on 3.6% of the analyzed websites. This means that even companies that take care of timely SSL certificate renewal can forget to update their protocol versions. So don’t forget to check your website’s current state.
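You can check what a server actually negotiates with Python's ssl module. A sketch (the "outdated" set follows this article's criterion of TLS 1.1 or higher being acceptable; note that today TLS 1.1 is also widely deprecated):

```python
import socket
import ssl

# Versions below TLS 1.1, per this article's criterion.
OUTDATED = {"SSLv2", "SSLv3", "TLSv1"}

def is_outdated(tls_version: str) -> bool:
    """Classify a version string as returned by SSLSocket.version()."""
    return tls_version in OUTDATED

def negotiated_version(host: str, port: int = 443) -> str:
    """Connect and report the TLS version the server negotiates."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

For example, `is_outdated(negotiated_version("example.com"))` tells you whether your own server falls into that 3.6%.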

No Server Name Indication (SNI) Support

Server Name Indication (SNI) is an extension to the Transport Layer Security (TLS) protocol, and it allows you to support multiple servers and host multiple certificates at the same IP address.
SNI helps with the problem we talked about previously: an SSL certificate registered to an incorrect domain name. Let’s say you add a new subdomain; users entering it will get a warning about an insecure connection, because the SSL certificate is issued to a different domain name. And it’s difficult, or rather impossible, to foresee all possible names. This is where SNI comes in, letting the server present the right certificate for each name and preventing this error from occurring.
It’s not a strict requirement, which is probably why SNI-related errors were discovered on just 0.56% of the websites we analyzed.
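Python sends SNI automatically when you pass server_hostname, so you can inspect exactly which names the certificate the server picks actually covers. A sketch (`dns_names` just filters the subjectAltName tuples that getpeercert() returns):

```python
import socket
import ssl

def dns_names(cert: dict) -> list[str]:
    """Extract the DNS entries from a parsed certificate's subjectAltName."""
    return [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]

def names_presented(host: str, port: int = 443) -> list[str]:
    """Connect with SNI and list the names on the certificate the server serves."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:  # SNI sent here
            return dns_names(tls.getpeercert())
```

If a subdomain's name is absent from the list returned for that subdomain, the server is not selecting a matching certificate for it, whether because SNI is unsupported or the certificate simply doesn't cover the name.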

About the SEMrush HTTPS Implementation Report

All the mistakes we've been discussing can be detected by the SEMrush HTTPS Implementation report, a new report available via the SEMrush Site Audit tool. We'd like to add a couple of words about the technical implementation of this report and the way it detects all these HTTPS pitfalls.
When detecting errors related to an expired SSL certificate, the SEMrush HTTPS Implementation report doesn't just show you the certificate's expired status, but the date it expired. Moreover, it can help prevent this problem by sending a notification about an upcoming certificate expiration.
certificate's expired status
If a certificate is registered to an incorrect domain name, the report will show the subdomain the certificate is issued for, which will help to quickly discover the problem.
As for server-related issues, the report will provide full information about the exact subdomains that need an upgrade of the security protocol (specifying the current version) or implementation of HSTS and SNI support.
Server related mistakes
Speaking of website-architecture-related issues, one of the most interesting checks in the report concerns mixed content detected on a page. The report finds any type of insecure HTTP element, extracted from the tag itself, which means it can find and specify literally any insecure element. Considering how time-consuming exploring mixed content can be, this report will definitely be a great help.
Mixed content
There is also a severity level mark for all errors, which will help you set priorities and work with the most dangerous issues first, then move on to the less important ones.
SEMrush Site Audit
So we can say that these new features, plus the high crawling speed, the 50 additional on-page and technical SEO checks and the friendly interface, make the SEMrush Site Audit tool one of the most powerful website auditors available on the market and definitely the best one among SEO suites.
So what do you think? Share your thoughts about our new report and let us know which HTTPS errors have given you the most trouble, as well as how you overcame them.


Friday, May 19, 2017

Your Company Blog Is Still Just as (or More) Powerful Than Snapchat, Instagram and Facebook

Today, it can be easy to disregard something like blogging as un-sexy and outdated in terms of being a viable channel to market your business. Plus, with a new social media platform arising virtually every year, neglecting blogging is easier now than ever before.
Despite this, having an active, curated company blog is just as important today as it was 10 years ago (if not more important).
In this article, you will discover a handful of reasons why you should still blog even if you’re killing it on much newer, hipper social media platforms. You’ll also learn four strategies on how to maximize blog traffic and the influence of your blog.

Why You Should Still Write a Blog

1. Longer Lifespan of Content

For the most part, Instagram and Snapchat content doesn’t show up in Google’s search results. On top of that, the lifespan of a Snap is a mere 24 hours. On Instagram, posts are lucky to stay relevant longer than 13 hours.
The short life cycle of this social media content is certainly a double-edged sword: the fact platforms like Twitter, Instagram, and Snapchat are so real-time means they facilitate the fluid exchange of ideas and information. All this makes them timely and relevant, thus powerful tools.
On the flip side, it also makes content on the same platforms fleeting, causing it to be difficult for businesses to keep up with the “digital Joneses” when it comes to social media. It is in the best interest of these social media platforms to ask for more content. It is in the best interest of Google to ask for the best content.
By having a company blog, customers could potentially be reeled in years down the road, all with a single piece of high quality content.

2. Impact on SEO

Image Credit: VerticalResponse.com
Despite the amount of media attention given to Instagram, Snapchat, Facebook and company, you shouldn’t forget that there are approximately 3.5 billion Google searches conducted every single day.
Comparatively, Facebook sees an estimated 2 billion searches per day, and Pinterest approximately 2 billion searches per month. Instagram’s total monthly user count just recently reached one-fifth of Google’s number of daily searches, and Snapchat is even further behind.
It’s clear Google is still the world’s top search engine. In order to get the most out of Google, you should be taking SEO (search engine optimization) into close consideration. When it comes to SEO, writing quality blog posts is a terrific way for your company to climb up the search rankings.
The better your SEO, the higher your probability of landing a client who stumbles across your work through Google in the future.

3. Ownership

Time and again, history has proven that the relevance of social media networks is a hard thing to predict. Just remember, Myspace was king from 2005 to 2008, Tumblr was popular from 2007 to 2013, and Vine was hottest from 2013 to 2015.
Instagram, Snapchat, Facebook, and the rest are all terrific platforms to use in order to garner the attention of your audience and to grow an audience, but using them as your company’s “home base” could prove unwise.
Blogs are much different. They are yours and only yours. No one else can take them away from you…well, except GoDaddy or Google Domains, but you get the point.

Successful Blogging Strategies

Now that you know a handful of reasons why blogging is still an indispensable marketing strategy, the real work begins. To help you, here are some tips and best practices to use to make sure your blog is getting the maximum exposure it deserves.

Leverage Platforms Like the One You’re Reading on Right Now — Medium ;)

It’s no surprise that today’s online landscape is saturated, and the blogosphere is no different. That’s why fresh platforms like Medium are such a valuable asset to have to increase the traffic and influence of your company blog.
Medium provides the perfect venue to showcase your own blog because it is (more or less) a blog in itself. It is a channel designed specifically for written content.
Additionally, Medium provides users with something they can’t automatically get from a standalone blog: a built-in audience of more than 30 million monthly users. Medium gives users the scale that would otherwise take years to build and nurture with a standalone blog.
Ready to get started? Here are some pointers:
  1. Read Quincy Larson’s article, which analyzes the best practices of the top 252 Medium articles in 2016.
  2. Use tools like Rabbut and Upscribe to seamlessly capture your reader’s emails, preferably after offering them a freebie (ebook, video course, etc.) in exchange for their email address.
  3. Use Medium to republish content from your company blog, and be sure to include a link to the original post so readers can stay in touch with you.
  4. Try your very best to get published on a large, relevant publication. To do this, reach out to the respective editor (via email or Twitter) with a link to your 100% completed article. Include a quick pitch going over why your content would be a great fit for the publication.

Don’t Be Afraid to Pay

Don’t be too shy to pay to promote your blog in the form of social media advertising (via Facebook ads, LinkedIn ads, Twitter ads, etc.).
Social media moguls like Gary Vaynerchuk think Facebook advertising is the single most valuable commodity in the online marketing world today, so it could definitely be worth your while to put some dollars behind the medium.
To get started, watch this short tutorial video covering how to create and manage Facebook Ads.

Use Quora

Neil Patel, digital marketing expert and founder of four multi-million dollar companies, preaches to his audience to search on Quora to discover which questions are being asked most often in your niche. You can find these questions in the Top FAQ section of the website.
After you find a question you like, write an answer to it in the form of a blog post. This will increase the likelihood others will search for and see the post, find it valuable, and come back for more.


Network

You can never go wrong with networking, and tools like Meetup.com, Facebook Groups, and LinkedIn Groups make meeting like-minded people easier now than ever before. Set aside an hour to search for groups and meetups in your niche.

If you enjoyed reading this post, please recommend and share it to help others find it!

Call to Action

If you really enjoyed this article and want to receive the shortened, PDF version of The 7 Mindset Shifts Needed for Successful Social Media Marketing, then click here to receive it now!


Tuesday, May 2, 2017

Professional and Business Online Presence #OLP

There is only one thing in life worse than being talked about, and that is not being talked about. - Oscar Wilde

Today, everyone wants to make business decisions quickly without needing to have meetings or walk into your store, as meetings and traffic are time-consuming. The place where the majority of people will learn about you and/or your business is online.
This is a positive direction.
Why? Because if someone can find everything they need about you online, they gain the confidence to purchase a product or set up a meeting, and the decision to actually hit the Purchase, Submit or Email button comes faster. This happens before they ever set foot in your building or meeting space. This is why it is important to care about your online presence.
Stop reading and take a few minutes to type your name or business name into the Google, Bing, or Yahoo search box. What links show up on pages 1-5? Are you even a result within the first 5 pages?
Great! Are they links that help you sell or build trust with customers to buy from or do business with you?

No, or just not sure?

We are available to help you answer those questions, contact us for a free 30 minute consultation.
Today you can no longer just have a website. People need 5-10 trusted sources before deciding to buy or schedule a meeting. They want to see your reviews on Glassdoor, Google, Yelp, Facebook, or even BBB. They will look at your customer’s comments and reviews. They want to see that you are covered in key media outlets as a leader in the industry and much more.
So now—go back to those first 5 pages of search. Do they include links to your LinkedIn company page, an article in a trade journal, videos, trade shows, associations….? If not, you have a lot work to do.
Now many marketing firms will tell you to run ads to increase your SEO. But ask yourself before you do: when was the last time you really wanted to click on one of those paid ads? You might have clicked only because you needed information right away. But if you had other choices on page 1 to get the info, you would more than likely skip the ad.
We have nothing against running ads, but you also have to do the hard stuff. Realistically, to get great links on pages 1-5, you may need to hire a person or an agency to help you reach your goal of good organic results within a short time frame.

Here are 5 key items that you can do to help you improve your online presence quickly:

1- Hire someone to write content for your website, social media and trade media.
2- Hire someone to take photographs— because you are going to need a lot of them.
3- Think about topics for videos for Facebook, Instagram, YouTube, etc. Then get someone to create at least one a month for the next 6 months or so.
4- Get someone to send press releases, product info, photos, videos, etc. to key media to earn high quality backlinks.
5- Make sure your website is current, looks clean and attractive, is mobile friendly, easy to use and is on the right server platform. (NSG uses AWS.)
There are always more things you can do to improve your online presence. But if you do at least some of these basic tasks, you will be much further ahead of your competition.
We know that your first meetings/engagements with potential partners/customers will be more successful when you are able to build trust from your online presence and you can show them that you are a company they want to do business with or even to buy a single product from.
To learn how NSG Consulting Inc can help: chase@nsgconsultinginc.com

Saturday, April 22, 2017

The Future of Serverless Compute

Key Takeaways

  • Serverless compute, or Functions-as-a-Service (FaaS), will be transformational in our industry - organizations that embrace it will have a competitive advantage because of the cost, labor and time-to-market advantages it brings
  • Many Serverless applications will combine FaaS functions with significant use of other vendor-provided services that provide pre-managed functionality and state management
  • Tooling will significantly improve, especially in the area of deployment and configuration
  • Patterns of good Serverless architecture will emerge - it's too soon to know what they are now
  • Organizations will need to embrace the ideas of 'true' DevOps and autonomous, self-sufficient, product teams to reap the full time-to-market benefits that Serverless can offer
It’s 2017 and the Serverless compute revolution is a little over two years old (do you hear the people sing?). This revolution is not coming with a bang, like Docker did, but with a steady swell. Amazon Web Services puts out new Lambda features and products on a regular cadence, the other big vendors are releasing production-ready versions of their offerings one by one, and new open source projects are frequently joining the party.
As we approach the end of the early-adopter period, it’s an interesting exercise to put on our prediction goggles and contemplate where this movement is going next, how it’s getting there, and most importantly what changes we need from our organizations to support it. So, join me as we look at one possible future of Serverless compute.
Note to readers from the actual future! You’ll probably get a good kick out of reading this. How far off was I? And how is 2020? Please send me a postcard!

A vision of Serverless capabilities


The last decade has seen the emergence, and then the meteoric rise, of cloud computing. Nine years ago, virtual public cloud servers were ‘toys’ for startups but in a relatively short time have become the de facto platform for large swaths of our industry as it considers any new deployment architecture.
Serverless compute, or Functions-as-a-Service (FaaS), is a more recent part of this massive change in how we consider ‘IT’. It is the natural evolution of our continuing desire to remove all baggage and infrastructural inventory from how we deliver applications to our customers.
A huge number of the applications we develop consist of many small pieces of behavior. Each of these is given a small input set and informational context, does some work for a few tens or hundreds of milliseconds, and finally may respond with a result and/or update the world around it. This is the sweet spot of Serverless compute.
We predict that many teams will embrace FaaS due to how easy, fast and cheap it makes deploying, managing and scaling the infrastructure necessary for this type of logic. We can structure our use of FaaS into various forms, including:
  • Complete back-end data pipelines consisting of a multitude of sequenced message processors
  • Synchronous services fronted by an HTTP API
  • Independent pieces of ‘glue’ code to provide custom operations logic for deployment, monitoring, etc.
  • Hybrid-systems consisting of traditional ‘always on’ servers directly invoking platform-scaled functions for more computationally intensive tasks.
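To make that "small input, a short burst of work, a result" shape concrete, here is a minimal FaaS-style handler in Python, in the signature AWS Lambda expects (the event fields and the doubling task are purely illustrative):

```python
import json

def handler(event, context=None):
    """A tiny unit of stateless work: double every number in the payload.

    AWS Lambda invokes a function of this shape with a parsed event dict
    and a runtime context object; the return value is the response.
    """
    numbers = event.get("numbers", [])
    return {
        "statusCode": 200,
        "body": json.dumps({"doubled": [n * 2 for n in numbers]}),
    }
```

Everything about deploying, scaling and billing this function is the platform's problem, which is exactly the appeal described above.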
Businesses that embrace FaaS will have a competitive advantage over those that don't because of the cost and time-to-market benefits it brings.

Managing Application State

One of the prerequisites of such a large adoption of FaaS is a solution, or set of solutions, for fast and simple approaches to state management. Serverless compute is a stateless paradigm. We cannot assume that any useful state exists in the immediate execution environment of our running functions between separate invocations. Some applications are fine with this restriction as it stands. For example, message-driven components that are purely transformational need no access to external state, and web service components that have liberal response-time requirements may be ok to connect to a remote database on each invocation. But for other applications this is insufficient.
One idea to solve this is a hybrid architecture that manages state in a different type of component than that executing our FaaS code. The most popular such hybrid is to front FaaS functions with other services provided by the cloud infrastructure. We already see this with context-specific components like API Gateway which provides HTTP routing, authorization, and throttling logic that we might typically see programmed in a typical web service, but defined instead through configuration. Amazon have also recently shown their hand in a more generic approach to state management with their Step Functions service, allowing teams to define applications in terms of configured state machines. The Step Functions service itself might not become a winner, but these kinds of codeless solutions are going to become very popular in general.
Where vendor services are insufficient, an alternative approach to a hybrid system is for teams to continue to develop long-lived components that track state. Those might be deployed within a CaaS (Containers-as-a-Service) or PaaS (Platform-as-a-Service) environment, and will work in concert with FaaS functions.
These hybrid systems combine logic in long-running components and per-request FaaS functions. An alternative is to focus all logic in FaaS functions, but to give those FaaS functions extremely fast retrieval and persistence of state beyond their immediate environment. A possible implementation of this would be to make sure that a particular FaaS function, or set of FaaS functions, have very low latency access to an external cache, like Redis. This could be provided by enabling a feature similar to Amazon’s same-zone placement groups. While such a solution would still incur more latency than memory- or disk-local state, many applications will find this solution acceptable.
The benefits of the hybrid approach are that frequently accessed state can stay in-environment with the logic using it, and that no complicated, and possibly expensive, network co-location of logic and external state are required. On the other hand, the benefits of a pure-FaaS approach are a more consistent programming model, plus a broader use of the scaling and operational benefits that Serverless brings. The current momentum suggests that the hybrid approach will win out, but we should keep our eyes open for placement group-enabled Lambdas, and the like.
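A pure-FaaS handler backed by an external store might look like the sketch below. The `cache` argument stands in for a Redis-style client (the get/set shape and the visit-counting task are assumptions for illustration); in a real deployment the client would be created once outside the handler and reused across warm invocations:

```python
def visit_counter(event, cache):
    """Count visits per user across stateless invocations.

    All state lives in the external cache; the function itself
    keeps nothing between calls.
    """
    key = "visits:" + event["user_id"]
    # Redis clients may return bytes or None; int() handles both.
    count = int(cache.get(key) or 0) + 1
    cache.set(key, count)
    return {"user_id": event["user_id"], "visits": count}

class DictCache(dict):
    """In-memory stand-in for a Redis client, for local testing only."""
    def get(self, key):
        return super().get(key)
    def set(self, key, value):
        self[key] = value
```

The trade-off the paragraph above describes is visible here: every invocation pays a round trip to the cache, which is only acceptable if that cache is very close to where the function runs.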

Serverless collaboration services

Beyond orchestration and state management, we see the commoditization and service-ification of other components that traditionally we would develop, or at least manage ourselves, even within a cloud environment. For instance, we may stop running our own MySQL database server on EC2 machines and instead use Amazon’s RDS service, or we may replace our self-managed Kafka message bus installation with Kinesis. Other infrastructural services include file systems and data warehouses, while more application-oriented examples include authentication and speech analysis.
This trend will continue, reducing still further the amount of work we need to do to create or maintain our products. We can imagine more pre-built messaging logic (think of a kind of ‘Apache Camel as a Service’ built into Amazon Kinesis or SQS), and also further developments in generic machine learning services.
A fun idea here is that FaaS functions, due to their lightweight application model, can themselves be tightly bound to a service leading to ecosystems of FaaS functions calling services that themselves call other FaaS functions, and so on. This leads to ‘interesting’ problems with cascading errors, for which we need better monitoring tools, as discussed later in this article.

Beyond the data center

The vast majority of Serverless compute so far is on vendor platforms running in their data centers. It gives you an alternative to how you run your code but not where you run your code. An interesting new development from Amazon is to allow their customers to run Lambda functions in different locations, for instance in a CDN with Lambda@Edge, and even non-server locations, e.g. IoT devices with Greengrass. The reason for this is that Lambda is an extremely lightweight programming model that is inherently event driven and so it’s easy to use the same intellectual ideas and style of code in new locations. Lambda@Edge is a particularly interesting example since it provides an option for programmed customization in a location that never had it before.
Of course, a drawback to this is even deeper vendor lock-in! Organizations that don’t want to use a third-party vendor but do want many of the benefits of Serverless compute will be able to get them with an on-premise solution, just like Cloud Foundry has done for PaaS. Galactic Fog, IronFunctions, and Fission (which runs on Kubernetes) are early efforts in this area.

The tools and techniques we’ll need

As I wrote previously there are significant speed bumps, limitations and tradeoffs when using a Serverless approach. This is no free lunch. For the Serverless user base to grow beyond early adopters we need to fix or mitigate these concerns. Fortunately, there is good momentum in this area.

Deployment tooling

Deploying functions to Lambda using AWS’ standard tools can be complex and error-prone. Add in the use of API Gateway for Lambda functions that respond to HTTP requests and you have even more work to do for setup and configuration. The Serverless and ClaudiaJS open source projects have been pushing on deployment improvements for over a year, and AWS joined the party with SAM late in 2016. All these projects simplify the creation, configuration and deployment of Serverless applications by adding considerable automation on top of AWS’ standard tooling. But there is still plenty to do here. In the future two key activities will be heavily automated:
  1. Initial creation of an application and/or environment (e.g. both initial production environment, and temporary testing environments)
  2. Continuous Delivery / Deployment of multi-component applications
The first of these is important in order to more widely enable the ‘conception-to-production lead time’ advances that we’ve started seeing. Deploying a new Serverless application needs to be as easy as creating a new GitHub repo - fill in a small number of fields, press a button, and have some system create everything you need to allow one-click deployment.
However, easy initial deployment is not sufficient. We also need good tools to support Continuous Delivery and Continuous Deployment of the type of hybrid application I mentioned earlier. This means we should be able to deploy a suite of Compute functions and CaaS / PaaS components, together with changes to any application-encapsulated services (e.g. configured http routes in an API Gateway, or a Dynamo table only used by a single ‘application’), in one click with zero downtime and trivial rollback capability. And furthermore, none of this should be intellectually complex to understand, nor need days of work to setup and maintain.
This is a tough call, but the tools I mentioned previously (together with hybrid tools like Terraform) are leading the way to solving these problems, and I fully expect them to be largely solved over the coming months and years.
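To make this concrete, here is a sketch of the kind of template these tools produce - a single HTTP-triggered Lambda function plus its API Gateway route, declared in AWS SAM’s YAML format in a few lines (the function name, paths, and code location are illustrative):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # enables the SAM resource types
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler            # file 'index', exported function 'handler'
      Runtime: nodejs6.10
      CodeUri: ./src                    # local directory packaged at deploy time
      Events:
        HelloApi:
          Type: Api                     # implicitly creates an API Gateway route
          Properties:
            Path: /hello
            Method: get
```

Compare this with the many pages of raw CloudFormation (function, REST API, resource, method, permission, deployment, stage…) needed to express the same thing - it’s exactly this kind of collapsing of boilerplate that these projects provide.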
This topic isn’t just about deploying code and configuring services, however. Various other operational concerns are involved. Security is a big one. Right now, getting your AWS credentials, roles and the like set up and maintained can be a hassle. AWS have a thorough security model, but we need tools to make it more developer-friendly.
In short, we need Developer UX as good as Auth0 have with their Webtask product, but for an ecosystem as vast (and as valuable) as AWS.

Monitoring, Logging and Debugging

Once our application is deployed we also need good solutions for monitoring and logging, and such tools are under active development right now by several organizations. Beyond assessing the behavior of just one component though, we also need good tools for tracing requests through an entire distributed system of multiple Serverless compute functions and supporting services. Amazon are starting to push in this area with X-Ray, but it’s very early days.
Debugging is also important. Programmers have rarely written code that handles every scenario correctly on the first pass, and there’s no reason to believe that’s going to change. Right now we rely on monitoring to assess problems in FaaS functions at development time, but that’s a stone-age debugging tool.
When debugging traditional applications, we get a lot of support from IDEs in order to set breakpoints, step through code, etc. With modern Java-based IDEs you can attach to a remote process that’s already running, and perform these debugging operations at a distance. Since we will likely be doing a lot of development using cloud-deployed FaaS functions, expect in the future that your IDE will have similar behavior to connect to a running Serverless platform and interrogate the execution of individual functions. This will need collaboration from both tool and platform vendors, but it’s necessary if Serverless is going to gain widespread adoption. This does imply an amount of cloud-based development, but we’re likely going to need that anyway for testing...
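Until that kind of remote debugging arrives, the pragmatic stand-in is to make monitoring itself as debuggable as possible - for instance by emitting structured, per-request log lines that a log aggregator can query. A minimal sketch of a Lambda-style handler doing this (the handler name and log fields are illustrative, not any platform’s required format):

```python
import json
import logging

logger = logging.getLogger("handler")
logger.setLevel(logging.INFO)

def handler(event, context=None):
    """Lambda-style handler that emits structured JSON log lines so a
    log aggregator (e.g. CloudWatch Logs) can be searched per request."""
    # In AWS, context.aws_request_id correlates all lines for one invocation.
    request_id = getattr(context, "aws_request_id", "local")
    logger.info(json.dumps({"requestId": request_id, "stage": "received",
                            "eventKeys": sorted(event)}))
    result = {"ok": True, "processed": len(event)}
    logger.info(json.dumps({"requestId": request_id, "stage": "completed"}))
    return result
```

Because every line carries the request id and is machine-parseable, tracing one bad invocation through the logs stops being an exercise in grep archaeology.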


Testing

Of all the Serverless tooling topics I’ve considered so far, the one that I think is least advanced is testing. It’s worth pointing out that Serverless does have some pretty hefty testing advantages over traditional solutions, in that (a) with Serverless compute, individual functions are ripe for unit testing, and (b) with Serverless Services you have less code to write, and therefore simply less to test, at the unit level at least.
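The unit-testing advantage in (a) is easy to demonstrate: a FaaS function is just a function, so a test can call it directly with a synthetic event and assert on the result - no servers, containers, or HTTP involved. A sketch, using a hypothetical pricing function:

```python
def discount_handler(event, context=None):
    """Hypothetical FaaS function: apply a 10% discount to orders over $100."""
    total = event["total"]
    discounted = round(total * 0.9, 2) if total > 100 else total
    return {"total": discounted}

# Tests invoke the handler directly - no deployment or emulator needed.
def test_discount_applied():
    assert discount_handler({"total": 200.0}) == {"total": 180.0}

def test_no_discount_under_threshold():
    assert discount_handler({"total": 50.0}) == {"total": 50.0}
```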
But this doesn’t solve the cross-component functional / integration / acceptance / ‘journey’ test problem. With Serverless compute our logic is spread out across a number of functions and services and so higher-level testing is even more important than with components using something closer to a monolithic approach. But how do we do this when we’re relying so much on execution on cloud infrastructure?
This is probably the haziest of my predictions. I suspect that what will happen is that cloud-based testing will become prevalent, partly because it will become much easier to deploy, monitor, and debug our Serverless apps than it is right now, for the reasons I just described.
In other words, to run higher level tests we’ll deploy a portion of our ecosystem to the cloud and execute tests against components deployed there, rather than running against a system deployed on our own development machines. This has certain benefits:
  • the fidelity of executing cloud-deployed components is much higher than a local simulation.
  • we’ll be able to run higher-load / more-data-rich tests than we might otherwise.
  • testing components against production data sources (e.g. a pub-sub message bus, or a database) is much easier, although obviously we’ll need to be careful of capacity / security concerns.
This solution also has drawbacks though. First, cycle time to execute tests will likely increase due to both deployment concerns, and network latency between the test - which will still run locally - and the remotely-executing system-under-test. Second, we can’t run tests when disconnected from the internet (on a plane, etc.) Finally, since production and test environments will be so similarly deployed, we’ll also need to be very careful about not accidentally changing production when we meant to change test. If using AWS such safety may be implemented through tools like IAM roles, or using entirely different accounts for different types of environment.
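One lightweight guard against the ‘accidentally changed production’ failure mode, alongside IAM roles or separate accounts, is to build the refusal into the test harness itself. A sketch (the environment names and endpoints are illustrative):

```python
import os

ENDPOINTS = {
    "test": "https://test.example.com",
    "staging": "https://staging.example.com",
    "production": "https://api.example.com",
}

def resolve_target(env_name):
    """Map a logical environment to an API endpoint, refusing to let
    an automated test run ever target production."""
    if os.environ.get("RUNNING_TESTS") == "1" and env_name == "production":
        raise RuntimeError("test runs must not target production")
    return ENDPOINTS[env_name]
```

A harness-level guard like this is no substitute for real access control, but it turns the most likely mistake into an immediate, loud failure rather than a quiet disaster.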
Tests are not just about a binary fail-succeed - we also want to know how a test has failed. We should be able to debug the locally-running tests and the remote components they are testing, including being able to single-step a Lambda function running in AWS as it is responding to a test. And so all of the remote debugging, etc., tools I mentioned in the previous section will be needed for testing too, not just interactive development.
Note that I’m not implying that our development tools need to run in the cloud, nor that the tests themselves have to run in the cloud, although both of these will occur to a greater or lesser extent. I’m merely saying that the system-under-test will only ever run in the cloud, rather than in a non-cloud environment.
Serverless can also be used as the environment that drives tests, with interestingly useful results. One example is ‘serverless-artillery’ - a load-testing tool that runs many AWS Lambdas in parallel to perform instant, cheap, and easy performance testing at scale.
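The core trick behind a tool like this - fan work out to many parallel workers and aggregate their reports - can be sketched locally. Here threads stand in for the parallel Lambda invocations, and the worker’s HTTP traffic is stubbed out so the example runs anywhere:

```python
from concurrent.futures import ThreadPoolExecutor

def load_worker(batch_id, requests_per_worker=10):
    """Stand-in for one Lambda invocation generating a burst of requests.
    The real tool would issue HTTP calls to the system under test here."""
    return {"batch": batch_id, "sent": requests_per_worker, "failed": 0}

def run_load_test(workers=20):
    """Fan out to `workers` parallel workers and aggregate their reports."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(load_worker, range(workers)))
    return {
        "workers": workers,
        "total_sent": sum(r["sent"] for r in results),
        "total_failed": sum(r["failed"] for r in results),
    }
```

Swap the thread pool for a few hundred concurrent Lambda invocations and you get serious load generated in seconds, paid for by the hundred milliseconds, with nothing to provision or tear down.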
It’s worth pointing out that we may, to some extent, dodge a bullet here. Traditional higher-level testing is actually becoming less important due to advances in techniques where we (a) test in production / use Monitoring-Driven-Development, (b) significantly reduce our mean-time-to-resolution (MTTR) and (c) embrace a Continuous Deployment mantra. For many Serverless apps extensive unit testing, extensive business-metric level production monitoring & alerting, and a dedicated approach to reducing MTTR and embracing Continuous Deployment will be a sufficient code validation strategy.

Architecture: many questions to answer

What does a well-formed Serverless Application look like? How does it evolve?
We’re seeing an increasing number of case studies of architectures where Serverless is being used effectively, but we haven’t yet seen something like a ‘pattern grouping’ for Serverless Apps. In the early 2000s we saw books like Fowler’s Patterns Of Enterprise Application Architecture, and Hohpe / Woolf’s Enterprise Integration Patterns. These books looked at a whole collection of projects and derived common architectural ideas useful across different domains.
Importantly, these books distilled several years of experience with the underlying tools before forming any unifying opinions. Serverless hasn’t even existed long enough as a technology to warrant such a book, but it’s getting closer, and within a year or so we’ll start seeing some common, useful practices emerge (anyone who uses the term ‘best practice’ today when it comes to Serverless architecture needs to be given a significant raised-eyebrow look).
Beyond application architecture (how serverless apps are built), we need to think of deployment architecture too (how serverless apps are operated). I already talked about deployment tools, but how do we use those tools? For instance:
  • What do terms like environments mean in this world? ‘Production’ seems less clear-cut than it used to be.
  • What does a side-by-side deployment of a stack and slowly moving traffic from one set of functions/service versions to a different set of functions/service versions (rolling deployment) look like?
  • Is there even such a thing as ‘blue-green’ deployment in this world?
  • What does roll-back look like now?
  • How do we manage upgrading / rolling-back databases and other stateful components when we might have multiple different ‘production’ versions of code running in functions simultaneously?
  • What does a phoenix-server look like now when it comes to 3rd party services that you cannot burn down and redeploy for cleanliness?
Finally, what are useful migration patterns as we move from one architectural style to something that is, or includes, serverless components? In what ways can our architecture change in an evolutionary way?
Many of these yet-to-be-defined patterns (and anti-patterns) are not obvious, most clearly shown by our very nascent ideas of how best to manage state in Serverless systems. There will no doubt be some surprising and fascinating patterns that emerge.

How our organizations will change

While cost benefits are one of the drivers of Serverless, the most interesting advantage is the reduction of ‘conception-to-production lead time’. Serverless enables this reduction by giving ‘superpowers’ to the vast majority of us engineers who aren’t experts in both systems administration and distributed systems development. Those of us who are ‘merely’ skilled application developers are now able to deploy an entire MVP without having to write a single shell script, scale up a platform instance, or configure an nginx server. Earlier I mentioned that deployment tooling was still a work-in-progress, and so we don’t see this ‘simple MVP’ solution for all types of application right now. However, we do see it for simple web services, and even for other types of apps deploying a few Lambda functions is still often easier than managing operating system processes or containers.
Beyond the MVP we also see cycle-time reductions through the ability to redeploy applications without having to be concerned about chef/puppet-script maintenance, system patch levels, etc.
Serverless gives us the technical means to do this, but that’s not enough to actually realize such improvements in an organization. For that to happen companies need to grapple with, and embrace, the following.

‘True’ DevOps

In many quarters, DevOps has come to mean ‘Technical Operations with the addition of techniques more often seen in development.’ While I’m all for increased automation and testing in system administration, that’s a tiny part of what Patrick Debois was thinking of when he coined the term DevOps.
True DevOps instead is about a change in mindset, and a change in culture. It’s about having one team, working closely together, to develop and operate a product. It means collaboration rather than a negotiated work queue. It means developers performing support. It means tech ops folk getting involved with application architecture. It means, in other words, a merging of skill and responsibility.
Organizations won’t see the efficiency gains of Serverless if they have separate Development and Ops (or ‘DevOps’) teams. If a developer codes an application but then needs someone outside of their immediate group to deploy the system, their feedback gains are wiped out. If an operations engineer is not involved with the development of an application, they won’t be able to adapt its deployment model on the fly.
In other words, in the future the companies that have made the real gains from Serverless will be the ones who have embraced true DevOps.

Policy / access control changes

But even a change in team-by-team culture is not sufficient. Oftentimes in larger companies, an enthusiastic team will come up against the brick wall of Organizational Policy. This might mean a lack of ability to deploy new systems without external approval. It might mean data access being denied to all but existing applications. It might mean ultra-strict spending controls.
While I’m not advocating that companies throw all their security and cost concerns out of the window, to make the most of Serverless they are going to need to adapt their policies so that teams can change their operational requirements without needing team-external human approval for every single update. Access control policies need to be set up not just for the now, but for what might be. Teams need to be given budgetary freedom within a certain allocation. And most of all, experiments should be given as much of a free-rein sandbox as possible while still protecting the truly sensitive needs of an organization.
Access control tooling is improving, through use of IAM roles and multiple AWS accounts, as I mentioned earlier. However, it is still not simple, and is ripe for better automation. Similarly, rudimentary budget control exists for Serverless, again mostly through multiple accounts, but we need easier control of per-team execution limits, and of different execution limits for different environments.
The good news is that all of this is possible through advances in access control tooling, and we’ll see more progress in patterns of budget allocation, etc., as Serverless tools continue to advance. In fact, I think automation of access and cost controls will become ‘the new shell scripting’ - in other words when teams think of the operational concerns of their software they won't think of start/stop scripts, patch levels and disk usage - instead they'll think of precisely what data access they'll need and what budget they require. Because teams will be thinking about this so often engineers will automate the heck out of it, just like we did with deployment before.
Given this ability and rigor, in the future, even for the most data-sensitive enterprises, passionately experimental teams will use Serverless technologies to try out ideas that would never have made it past the whiteboard before, and will do so knowing that they are protected from doing any real intellectual or financial damage.

Product ownership

Another shift we’ve seen on many effective engineering teams over the last few years is a change of focus from projects to products. Structurally this often comes via less focus on project roadmaps, iterations and burndown charts, and more on a kanban-style process, lightweight estimates and continuous delivery. More important than the structural changes, though, are the shifts in role and mindset towards more overlapping responsibilities, similar to what we see with (true) DevOps.
For instance, it is very likely now that product owners and developers will collaborate closely on the fleshing out of new ideas - developers will prototype something, and product owners may dig into some technical data analysis, before locking down a final production design. And similarly, the spark of innovation - where a new idea or concept comes into someone’s head - could belong to anyone on the team. Many members of the team, not just one, now embrace the idea of customer affinity.
A Serverless approach offers a key benefit to those teams embracing a whole-team product mindset. When anyone on the team can come up with an idea and quickly implement a prototype for it a new mode of innovation is possible. Now Lean Startup-style experimentation becomes the default way of thinking, not something reserved for ‘hack days’, because the cost and time of doing so is massively reduced.
Another way of looking at this is that teams that don’t embrace a whole-team product mindset are likely to miss out on this key benefit. If teams aren’t encouraged to think beyond a project structure, it’s hard for them to make as much use of the accelerated delivery possibilities that Serverless brings.


Conclusion

Serverless is a relatively new concept in software architecture, but is one that is very likely to have an impact as large as other cloud computing innovations. Through technology advances, tooling improvements and shared learning in Serverless application architecture, many engineering teams will have the building blocks they need to accelerate, and even transform, how they do product development. The companies that adopt Serverless, and adapt their culture to support it, are the ones that will lead us into the future.


Thanks to the following for their input into this article: John Chapin, Chris Stevenson, Badri Janakiraman, Ben Rady, Ben Kehoe, Nat Pryce.

About the Author

Mike Roberts is an engineering leader and cofounder of Symphonia, a serverless and cloud technology consultancy. Mike is a long-time proponent of Agile and DevOps values and is excited by the role that cloud technologies have played in enabling such values for many high-functioning software teams. He sees serverless as the next technological evolution of cloud systems and as such is optimistic about their ability to help teams be awesome. Mike can be reached at mike@symphonia.io and tweets at @mikebroberts.