How to Set Yourself Up to Thrive in the Knowledge Economy

Hey, social change agents!

Now that I've spent time training my brain to concentrate, doing something just past my ability to understand or implement, and consolidating my learning, I move on to the Analysis phase. This is the point where I pick up the ideas generated in the Experimentation phase and develop them fully by integrating them with what I already know. Let's get analytic!

Level 4: Analysis

In effect, I analyze the new subject or idea in a wider context, constructing mental systems that work out the implications of this new knowledge. This is the starting point of building my own personal and unique career capital as someone who can work creatively with intelligent machines AND who is a star in my field.

Action / Inputs

The point of working the new knowledge into my established systems of knowledge is to construct new frameworks of understanding and implementation, the core outcomes of the learning goals discussed in Level 2: Experimentation and Level 3: Comprehension. In constructing these new frameworks, my primary goal is to judge internal consistency with existing knowledge that has already been validated as effective in practice and in theory. As in the Comprehension phase, I can construct a conceptual minimum viable product (MVP) to demonstrate this internal consistency. In The Lean Startup's vocabulary, this means using the conceptual MVP to reach validated learning in the market (here, the market of established systems of understanding and implementation). I do this in two ways:

  1. Testing the MVP against many markets: By describing the conceptual MVP to general audiences (as I am with this series of blog posts), I can judge whether a new framework is general enough to be applied broadly in practice. In the social sciences, this is the process of crafting a hypothesis and using surveys and experiments on the general public to validate it.
  2. Testing the MVP against one expert market: By comparing my conceptual MVP to established frameworks constructed by experts (as I will do by emailing my 'personal advisory board' following the completion of this series of blog posts), I can judge whether a new framework is consistent with already validated theory. In the social sciences (or in any science), this is the process of peer review, which validates that the new framework's hypothesis is both in line with validated theory and a logical evolution of that theory.

These two testing scenarios give me a test in both theory and practice, in line with my aforementioned learning goals, rewritten here for your convenience:

  • Can I produce an easily understandable description of the concept in 1 sentence?
  • Can I cite at least 3 counter-arguments against a concept and provide internally-consistent answers to those arguments?
  • Can I produce a drawing that accurately addresses key tenets of the concept?
  • Can I accurately diagnose systems that are incorrectly using this concept or introduce this concept to systems through a custom implementation plan?
Outputs

Judging internal consistency can be quite a challenge. That's why I've created a set of activities I can conduct that will help me test my conceptual MVP in both theory and practice:

  • Creating a set of FAQs that could negate the hypothesis and thinking through solutions: This exercise forces me to try to undermine my own arguments, giving me insight into possible disagreements individuals might have with what I have to say. By trying to poke holes in my own arguments, I can put internal consistency to the test.
  • Free thought time to make new associations: In Deep Work, Cal Newport discusses productive meditation (trying to solve a specific professional problem mentally while your body is engaged physically) and walking in nature (an intrinsically interesting stimulus) as means of supporting focused but free association periods for solving problems. These methods of structured free thought can provide an even steadier understanding of my conceptual MVP by rooting it in even more existing validated learning.
  • Creating worksheets and guides: This activity puts the implementation learning goals to the test. By teaching others and receiving feedback on those teachings, I can solidify my own understanding of my conceptual MVP. It forces me to 'show my work' in arriving at and using my conceptual MVP, and it is both a test of theory and a test of practice.
  • Proposed thought experiments: During free thought, I can try to invalidate my conceptual MVP by proposing scenarios where its basic tenets are called into question. If I can find gaps in my understanding or implementation of my conceptual MVP, I can create addenda that restore internal consistency and thus make my arguments stronger. If I find gaps that, over many sessions of productive meditation, I can't fill, that could be a sign that I need to rethink my framework or enlist others' brainpower to help me find a solution.
  • Book summaries re-imagined through the conceptual MVP's lens: This exercise combines both Level 3: Comprehension and Level 4: Analysis. It requires me to accurately understand and describe another's career capital (Comprehension) and then re-interpret it through my own MVP as a means of building my own career capital (Analysis). For example, Cal Newport mentions levels of depth in Deep Work, but he doesn't clearly define those levels, only that they exist. One exercise I might (and may!) undertake is to create a chapter-by-chapter book summary through the lens of my Roadmap for Deeper Work.
  • Soliciting opinions via submission to online publications, polls, newsletters, LinkedIn, and my 'personal advisory board': Just as I needed to solicit opinions on my own understanding in Level 3: Comprehension, I need to do the same with my conceptual MVP in Level 4: Analysis. The communication tools listed can be used strategically to provide both the general and expert feedback I need to validate my conceptual MVP in theory and in practice. However, I have to be careful with certain network tools, as they have been engineered to pry my attention away from my purpose and toward personally curated clickbait designed to keep me on their platform for as long as possible.
Metrics

The Analysis phase is judged by the strength of the conceptual MVP. To assess that strength, I've compiled a few metrics that give me an informal read:

  • Quality of feedback rated from 1-5 (many markets): This metric is similar to the one used in the Comprehension phase, but it requires an additional rubric for evaluation. I judge the quality of feedback by how often the critic comments on my argument's structure or on its logical conclusions. Each block of feedback is rated 1-5 on structure, where 1 means structure is never noted and 5 means it is noted many times throughout, and rated the same way on logical conclusions. The two numbers are then averaged for an overall quality-of-feedback score (see the sketch after this list). Both the structure and the logical conclusions matter to me because I want my understanding to be both communicated elegantly and consistent with my own set of core values (check them out here!). This metric primarily serves the learning goal that requires me to cite at least 3 counter-arguments against a concept and provide internally-consistent answers to those arguments.
  • Industry diversity of feedback (many markets): The industry diversity of feedback helps me understand how well the conceptual MVP adapts to many different industries, and it helps validate product-market fit with the target market.
  • Bounce-rate on conceptual MVP content (many markets): This is a simple metric that can be pulled using Google Analytics. The bounce-rate is the rate at which people visit that page and then leave the site without visiting other pages. This can be particularly relevant when promoting the content on social media or via paid advertising to see how compelling the conceptual MVP is overall.
  • Interaction rate of experts with conceptual MVP (expert market): The interaction rate would need to be calculated manually by tracking the emails and exchanges with experts in the conceptual MVP's field; in this case, that means tracking emails with productivity experts. The rate is the number of responses divided by the number of requests for responses. This matters because productivity experts, who as described in Deep Work are likely not nearly as active on email, will only respond if they find the content compelling enough to warrant an answer; receiving a reply therefore points to the intrinsic value of the content. As with other rates, it is important to establish a baseline, so I will calculate the rate for my first outreach to productivity experts but won't assign it a quality value, since that first round is the baseline. With subsequent conceptual MVPs, I'll be able to evaluate the interaction rate more meaningfully within the context of that baseline.
  • Average # of exchanges with experts (expert market): Again, because productivity experts are less likely to respond as frequently or at all on email, the number of exchanges is a metric that points to the value of what I am requesting feedback on. As with the above metric, I will need to establish a baseline to know what is 'good'.
  • Rate of exchanges with everyone (many markets): Pulling from my knowledge of online communities, I know the 90-9-1 rule: roughly 90% of people on any given site are lurkers, those who do not comment on the subject but who read and consume the content; another 9% participate occasionally; and the remaining 1% are content creators, those who remix the provided content and provide commentary. With that in mind, I can measure the rate of exchanges as a percentage and compare it to the ideal (10%). This is important for validating the conceptual MVP against many markets.
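Since several of these metrics are just averages and ratios, a small script can keep the bookkeeping honest. Below is a minimal Python sketch of how I might tally three of them: the quality-of-feedback score (average of the two 1-5 ratings), the expert interaction rate (responses divided by requests), and the exchange rate measured against the 10% ideal. The function names and all of the numbers are illustrative placeholders, not real results or a prescribed tool.

```python
# Minimal sketch for tallying a few Analysis-phase metrics described above.
# All names and numbers are placeholders for illustration only.

def feedback_quality(structure_score: int, logic_score: int) -> float:
    """Average the two 1-5 ratings: argument structure and logical conclusions."""
    for score in (structure_score, logic_score):
        if not 1 <= score <= 5:
            raise ValueError("ratings must be on the 1-5 rubric")
    return (structure_score + logic_score) / 2


def interaction_rate(responses: int, requests: int) -> float:
    """Expert interaction rate: responses received per outreach request."""
    return responses / requests if requests else 0.0


def exchange_rate(participants: int, total_audience: int, ideal: float = 0.10):
    """Share of the audience that actually engages, compared to the ~10% ideal
    implied by the 90-9-1 rule. Returns (rate, gap versus the ideal)."""
    rate = participants / total_audience if total_audience else 0.0
    return rate, rate - ideal


# Example usage with made-up numbers:
print(feedback_quality(structure_score=4, logic_score=3))   # 3.5 out of 5
print(interaction_rate(responses=3, requests=12))           # 0.25 -> first round sets the baseline
rate, gap = exchange_rate(participants=18, total_audience=400)
print(f"exchange rate {rate:.1%}, {gap:+.1%} vs. the 10% ideal")
```

The point of the sketch is simply that each metric reduces to a number I can track round over round, so the baseline comparison described above stays consistent.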
Wrapping Up

Analysis is the point where one transforms from a content consumer into a content producer. It is where you test the new information against already validated hypotheses and your own personal worldview. This level of depth speaks to the human feedback loop of learn, build, and measure (or, in The Lean Startup's ordering, build-measure-learn). As American futurist and writer Alvin Toffler notes:
[Image: Alvin Toffler quote. Image credit: blog.tsemtulku.com]

Make sure you don’t miss Level 5 by signing up for my mailing list to receive weekly roundups. I’ll also be modifying the chart as I test it with others to make it more universally adaptable (which you’ll get first … if you’re on my mailing list, that is!).