Growth thinking and experimentation in education: 3 key factors to get started

(A version of this article will also be published on the Open LMS blog.)

Growth thinking is a term primarily used in the context of startups and software companies. It is an approach to strategy that aims to maximize growth, typically measured in number of users, while spending as little budget and time as possible. Institutions and organizations can use this approach when looking to embark on revenue diversification initiatives or when designing improvements to teaching and learning practices.

In this post, we’re going to take a look at some aspects of getting started with growth thinking in the context of education, from two perspectives:

  1. How can we use growth thinking from an education business perspective? This could be useful for new revenue streams under consideration, like an e-commerce project, in-company programs, micro-credentialing, etc.
  2. How can we use an experimental mindset and growth mechanics in the context of academic effectiveness? Some sort of “instruction hacking,” if we may call it that.

For this post’s context, I’m going to refer to learners/students/employees as users.

Map the journey

The first step is to think of and map all of the stages users go through to discover and experience value, whether from an online course offering or the learning outcomes from within an actual course.

Let’s think for a moment about a user who wants to join an engineering course via an e-commerce offering from an imaginary institution. Some questions we can ask are: How does this person get to know about the course? What happens after she arrives at the course description page? What information does she look at? What steps does she need to take to purchase the course? What about enrollment? When could we say this user is engaged in the course? When do we say this person was successful, ready to continue her learning path in another course, and perhaps recommend the current one to others?

Each step of this journey is something we should aim to track, especially when a user shifts from one step to the next, which is what we can call “conversion.” This tracking is particularly important because it allows us to start generating hypotheses and ideas to reduce friction and improve outcomes on the path to experiencing value. We can begin to think about all the different levers to pull at each stage of the journey to make it better.
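To make “conversion” measurable, here is a minimal sketch in Python (the stage names and counts are hypothetical, not from a real funnel) of how you could compute step-to-step conversion once the journey is mapped and tracked:

```python
# Hypothetical journey stages and the number of users who reached each one.
journey = [
    ("Visited course page", 1000),
    ("Viewed course details", 400),
    ("Purchased", 120),
    ("Completed first activity", 90),
    ("Finished the course", 60),
]

# Conversion rate: the share of users who move from one stage to the next.
for (stage, users), (next_stage, next_users) in zip(journey, journey[1:]):
    print(f"{stage} -> {next_stage}: {next_users / users:.0%}")
```

A sharp drop between two consecutive stages is usually a good place to start generating hypotheses.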

Let’s see another example of this, now in the context of academic effectiveness. We can apply the same rationale about the journey, but in this case, instead of the steps to purchase, we map the steps this person takes in their learning experience. From there, we can think about how to optimize each step.

Let’s think of an example of a corporate university initiative. How can a user discover a course that could close a skill gap? How does the user get enrolled? How does the user discover the value of the course? How does the user navigate through course topics and participate in them? How does she engage with activities? How does she complete the course successfully?

What the journey looks like will depend on your context, your users, and the processes already in place.

Approach improvements with an experimental mindset

Mapping the experience gives us visibility into the things we can start optimizing: communications, removing friction in some steps, changes to how we present content, etc.

Instead of relying just on our experience or intuition about what could work, we can frame the improvements we want to make as experiments to test and validate whether we are advancing in the right direction. Running them on a subset of our target audience also limits risk and upfront investment.

A deep dive into the scientific method or statistical analysis is out of the scope of this post, but here are some things to keep in mind to plan your initiatives as experiments:

  • Create a hypothesis: This could come from observation, a market trend, a new best practice, etc. This is what you believe could have an effect on the target metric in your user’s journey. 
  • Communicate why: Map your hypothesis to a stage in the user journey you want to improve. Share why you think the result you expect matters; this will help justify the initiative to others on the team.
  • Identify how: Plan how you can test your hypothesis on a subgroup that represents the entire target audience and can provide proper insight. Frame the how in a way that also minimizes initial investment and risk. Remember that in some cases, you don’t need to build everything to test your assumptions. Prototypes, “Wizard of Oz” experiments, and other mechanisms can be your allies here.
  • Get the team on board: Sharing why you’re doing experiments and the metrics you expect to improve is a great way to get the ball rolling with your team and get buy-in. But here’s another aspect that helps: get them involved in the ideation process. If you’re a training or teaching & learning leader, you can act as a facilitator, helping people understand why experimentation matters and getting them excited about sharing their improvement ideas. Make sure they feel appreciated regardless of the experiment’s results.
  • Create an idea backlog: Act as a product manager of your experimentation pipeline and think of selection criteria that work best for your organization. A popular framework used in growth is the Impact, Confidence, and Ease (ICE) model, illustrated in the sketch after this list.
  • Save your results: My biggest recommendation in this area is to think about how you can document everything related to your experiments, from ideas to learnings. This will be helpful when you want to discuss findings with other members of your team. Documentation also helps when you want to present your experiments’ results as evidence for the choices you make.
  • Consider investing in experimentation itself: From tools to education on statistics, as you prove the value of experiments in your institution or company, it makes sense to step up your game.
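As an illustration of the ICE model mentioned above, here is a minimal sketch, with made-up ideas and scores, of how you could rank an experiment backlog. One common variant of ICE averages the three ratings; others multiply them:

```python
# Hypothetical idea backlog scored on Impact, Confidence, and Ease (1-10 each).
ideas = [
    {"idea": "Simplify the enrollment form", "impact": 7, "confidence": 8, "ease": 9},
    {"idea": "Send weekly personalized reminders", "impact": 8, "confidence": 6, "ease": 5},
    {"idea": "Redesign the course catalog", "impact": 9, "confidence": 4, "ease": 2},
]

# ICE score as the average of the three ratings; higher-scoring ideas run first.
for item in ideas:
    item["ice"] = (item["impact"] + item["confidence"] + item["ease"]) / 3

for item in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f"{item['ice']:.1f}  {item['idea']}")
```

Whichever variant you choose, the point is to make prioritization explicit and comparable rather than ad hoc.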

Select appropriate tools

There are three big categories in which we can organize tools depending on the value they provide for your experimental framework. Let’s take a look at some examples.

Tools to organize your experimentation pipeline

The idea here is to set up a process that works for you, and do it in a way that provides quick visibility into the status of your running experiments, their results, etc. My recommendation is to start with the project management tool you are already using and familiar with: Basecamp, Asana, or Trello, for example.

If you want to go a bit further in tweaking your tool to match your exact process, consider tools like Airtable or Coda, which are flexible and provide more customization options.

If you want to use something born from growth thinking, or your focus will be mainly on the business side of growth, check out platforms built specifically for running growth hacking experiments.

Analytics tools

These tools can help you quantify the user journey you previously mapped and put numbers on the behaviors you can observe. In this space, you can start by looking at the number of users who arrive at a stage, the number of people who drop off or lose interest at a given moment, metrics about events, etc. An important consideration here is to be intentional about what you want to measure: you’ll have many options, but only some will provide the kind of insight that is useful for your experiments and decisions.
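For instance, here is a minimal sketch, assuming a simplified event log with hypothetical stage names (most analytics tools can export something similar), of how you could count how many distinct users reach each stage and spot where they drop off:

```python
# Hypothetical (user_id, stage) events, e.g., exported from an analytics tool.
events = [
    ("u1", "enrolled"), ("u1", "started_topic"), ("u1", "completed"),
    ("u2", "enrolled"), ("u2", "started_topic"),
    ("u3", "enrolled"),
]

stages = ["enrolled", "started_topic", "completed"]

# Count distinct users per stage to see where people drop off.
for stage in stages:
    users = {user for user, event in events if event == stage}
    print(f"{stage}: {len(users)} users")
```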

There are a lot of tools in this space, but here are some that I highly recommend:

  • Learning analytics: These are tools that will help you understand what’s happening in your courses, your program, your learners, your content, etc. Open LMS provides more than 40 reports out of the box to measure activity completion, course metrics, etc. You can use this information to conduct experiments about learning models, content effectiveness, etc. Depending on your institution’s needs, you can also expand your capabilities with tools like IntelliBoard or Watershed to bring additional insight or connect data with other sources of information.
  • Web/product analytics: If you want to track some aspects of your initiative from a wider perspective, you can add web/product analytics solutions like Google Analytics or Mixpanel into the mix. These tools can help you understand traffic and user behavior across web properties such as your main website, the LMS, and other web services for learners, such as enrollment services. This information is especially useful in revenue diversification initiatives such as e-commerce, in-company programs, etc. These tools don’t provide the level of detail on the learning experience that learning analytics tools do, but they can complement your experiments or support experiments at other stages of the journey you mapped.
  • Visualization/recording tools: Maybe this is not precisely analytics, but visualization tools like Visual Website Optimizer or Hotjar, which let you create heatmaps or record user sessions, can provide an additional layer of insight that is very useful for understanding usage and interactions. This could bring more clarity to questions like: How do my users navigate the course? How are new users interacting with the platform? What are some ways we can help them get started?

Something essential to consider in this section: any analytics tool you want to use, or any other application that interacts with user information, should be handled with extreme caution and respect for users’ privacy. Be sure to work with your data privacy team on how to run your experiments in a way that works for both the user and the organization.

“Lever” and experimentation tools

In this category, I want to highlight tools that you can use to implement your experiments at different levels.

  • An agile learning management system: Do you want to see if e-commerce works better for a course than other channels? Do you want to check whether people engage more in in-company programs with brand customization X vs. Y? Want to see if people get better results when they receive frequent feedback or personalized reminders? Your LMS needs to support your experimentation intentions and provide the right capabilities to do so at scale. Think of something that not only works for one or two experiments but also allows your team, or even other audiences like instructors, to run their own experiments and use your LMS as a framework for doing so. For example, learn how XXX used the Personalized Learning Designer (PLD) to automate feedback at scale.
  • Onboarding and assistance tools: If your experiments aim at things like engagement, onboarding, and assistance, you can check out tools like Appcues, Intercom, or Drift to create flows that help your users move from one stage of their path to the next, e.g., helping instructors develop a course or helping students find their way around. To start, you can also use features like Open LMS user tours.
  • Optimization, testing, and feature flags: Here you can find tools like Google Optimize, Optimizely, Split.io, etc. These have traditionally been used on applications or marketing websites to roll out product features to a subset of users or do A/B testing, but they are also powerful tools for learning experiences. Think, for example, of experimenting with different copy in course content or a different way of presenting the course online. There are also ways to do this without these tools, like running two versions of the same course with different cohorts or using Open LMS’s conditional release options, so make sure the tools aren’t overkill for your experiments. It all depends on your context; the sketch after this list gives a sense of what analyzing such an experiment can look like.
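To give a sense of the analysis involved, here is a minimal sketch, with made-up numbers, of a two-proportion z-test comparing completion rates between two versions of a course run with different cohorts:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: completions out of enrolled users per cohort.
completed_a, enrolled_a = 48, 120   # version A of the course
completed_b, enrolled_b = 66, 125   # version B of the course

p_a = completed_a / enrolled_a
p_b = completed_b / enrolled_b

# Pooled proportion and standard error for a two-proportion z-test.
pooled = (completed_a + completed_b) / (enrolled_a + enrolled_b)
se = sqrt(pooled * (1 - pooled) * (1 / enrolled_a + 1 / enrolled_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}, p = {p_value:.3f}")
```

If the p-value falls below the threshold you chose beforehand (0.05 is a common convention), the difference between the two versions is unlikely to be due to chance alone.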

I hope this post is helpful in your conversations around experimentation and growth in your institution or company. I believe this is a crucial capability to develop for digital transformation and innovation. I would love to hear whether you’re currently experimenting, the types of experiments you’re running now, or what other tools and practices are working for you.
