Accurate Estimations

We’re constantly asked to give estimates:

How long will it take?
How much will it cost?

What time should we leave?
How much do you want?

Estimations are information about the unknown. We constantly use this information to make decisions: allocating resources, changing strategies, and choosing partners. But despite all this practice, we’re horrible at accurate estimations.

Why Estimations are Hard

Estimations are hard for both technical and social reasons.

We don’t know what we don’t know. Naive estimators fail to account for surprises. They estimate based on known factors and best-case scenarios. These estimates may look perfectly reasonable beforehand, but they’re instantly broken by the first surprise.

Experienced estimators account for this problem by ‘adding some buffer’. But even then, how much should they add? Even knowing that surprises can happen, it’s impossible to know how many will happen or what their impact will be. Choosing the right amount of buffer is a lot like making the right estimation in the first place. We still don’t know what we don’t know. And adding too much buffer can be as expensive as failing to account for surprises at all.

In addition to this technical problem, there’s a strong social problem. Let’s imagine two common scenarios.

In the first scenario, you’re giving an estimate to your team. You perfectly estimate that a project will take three weeks, but your manager gives you a puzzled look. Your teammate snickers, claiming they could do it in a week, tops. The feeling is that you must be either lazy or incompetent to give such a padded estimation, so you cave and shorten it. Your newly shortened estimate is wrong, so you go on to extend the project’s deadline twice in three weeks. Rather than holding you accountable to that one-week estimate, your manager commends you for handling the unforeseen surprises on such a complicated project.

In the other scenario, you’re giving an estimate to a potential client. You perfectly estimate that a project will take three weeks, but your competitor estimates it’ll only take a week. The client signs with your competitor. Since that estimate was wrong, your competitor goes on to extend the deadline twice in three weeks. Even though your estimate was accurate, your competitor’s estimate got them paid.

I’ve been in countless scenarios like this. Sometimes people outright pressure us into shortening our estimations, and sometimes the voice in our heads pushes us to. Either way, giving accurate estimates is both technically hard and socially challenging [1].

Two Types of Estimators

In response to this hard problem, we become systematic underestimators or systematic overestimators.

Underestimators fail to give enough buffer. This strategy has two key benefits. First, it signals (unrealistically) high performance. Like our snickering teammate, we can underestimate ahead of time, then point at concrete surprises to excuse our eventual underperformance. And like our overpromising competitor, we can underestimate during a bid and do whatever we want after the contract is signed. Second, tight estimates demand efficiency. Underestimators set deadlines that they and their teams must work hard to meet. Underestimation works well when the costs of going over-budget are small. But when those costs are large, underestimations lead to disasters. On the whole, underestimators systematically run the risk of being burnt out, past-deadline, and over-budget.

Overestimators are instead biased towards large buffers. Extreme overestimators might send you articles titled “Estimations are a Scam”, or claim that estimations are simply tools for worker exploitation. Overestimation works when the costs of buffer are low and the costs of going over budget are high. But overestimators are constantly taxed by Parkinson’s Law: the pattern where projects fill the time and resources they’re allocated, instead of the time and resources they need [2]. Rather than pushing towards peak performance, overestimators systematically move at a bored, leisurely pace. Overestimators are also demotivating. Rather than inspiring the team to reach competitive goals, they disparage those who set them. So while underestimators run the risk of their teams burning out, overestimators run the risk of their teams shutting down.

To simplify the hard problem of estimations, we slowly become under- or over-estimators. We reap the systematic rewards and accept the systematic costs. These chosen strategies may work in many contexts. But for any simple strategy, there are worst-case scenarios where those systematic risks blow up. Underestimators blow up when the costs of going over budget skyrocket. Overestimators blow up when the costs of adding extra buffer skyrocket.

This line between underestimators and overestimators forms a classic “spectrum problem”. A spectrum problem occurs when we oversimplify the solution to a given tradeoff. In this case, we’re splitting the spectrum of estimation strategies in half. Underestimators fall on one side of the line, and overestimators fall on the other. With many spectrum problems, the better solution is to cut the spectrum into three pieces, choosing the middle strategy between extremes. In doing so, we acknowledge the costs and rewards of both sides, maximizing the upsides and minimizing the downsides systematically.

Playing Single-Pointed Darts

Imagine a game of darts with very simple rules. I throw my dart first, and you only score by hitting that same exact dart-sized point. This game is very simple, but so difficult that nobody would play it. To make darts playable, we specify scoring ranges. These same ranges are missing from our everyday estimations.

The most common estimate sounds something like: “I’ll have it ready by 5pm.” But this is just like playing single-pointed darts! Imagine that this estimate is perfectly precise: the project is ready not one second before or one second after 5pm. While impressive, this leaves zero room for error. If we end up sick, or if the project ends up more complex than expected, our estimate becomes instantly wrong.

To account for surprises, we can add buffer. But adding buffer only shifts the dartboard over a few inches. “I’ll have it ready by 5pm tomorrow” faces all the same problems as the first estimate. Two days of surprises still make me wrong. We’re still playing single-pointed darts.

It’s a losing game. And yet we see it played again and again, day after day, project after project.

Playing Darts with Ranges

To enjoy this game of darts, we need to change the rules. Rather than giving a single-pointed estimate, we give two points: one for the best case, and one for the worst case. These two points create a range of targets to hit, just like the game of darts we know and love.

To demonstrate, I can give an example from my personal life. My girlfriend is very punctual, and I’m not. She’s a classic overestimator, giving as much buffer as we can afford. And I’m a classic underestimator, giving the most optimistic estimates. So whenever we have somewhere to be, and she asks me “what time should we be ready?”… it’s a classic estimation problem!

My preference would be to underestimate, and tell her the last minute we can leave without being late. Her preference would be for me to overestimate, telling her the soonest we can leave without being “too early”. But no matter what I say, we face all of the same problems stated above.

Recently, I’ve started giving her two answers. The first answer is “the green time”. The green time tells us when we should leave so that we’re pleasantly early. The second answer is “the red time”. The red time is the last minute we can leave without definitely being late. If we can be ready by the green time without stress or shortcuts, that’s perfect. Once we pass the green time, we go into the “yellow zone.” The yellow zone is the buffer between early and late. There’s no need to take shortcuts or change plans yet, but we’re getting close. Once we approach the red time, we start discussing the need to take shortcuts, or to start telling others that we may be a little late.

Having a green time, a yellow zone, and a red time transforms our game of single-pointed darts into a proper dartboard, with zones of success. The green time makes my girlfriend happy: she can be ready early without worry. The red time makes me happy: I can fill my buffer time up with other activities. And the yellow zone is a signal for both of us to get focused or to start taking shortcuts [3].
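This green/yellow/red routine is simple enough to sketch in code. Here’s a minimal Python illustration; the function and the specific times are made up for the example:

```python
from datetime import time

# Hypothetical sketch: classify the current moment against a
# "green time" and a "red time".
def zone(now: time, green: time, red: time) -> str:
    """Return which zone a moment falls into."""
    if now < green:
        return "green"   # pleasantly early: no stress, no shortcuts
    elif now < red:
        return "yellow"  # the buffer: get focused, watch the clock
    else:
        return "red"     # time to take shortcuts and warn the others

# Checking in at 17:40 against a 17:30 green time and an 18:00 red
# time puts us squarely in the yellow zone.
print(zone(time(17, 40), green=time(17, 30), red=time(18, 0)))
```

Comparing `time` values directly keeps the sketch simple; a real version might also handle departures that cross midnight.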

Aside from the technical benefits, I can feel the difference in enjoyment between playing single-pointed darts and playing darts with scoring ranges. Having a range makes the inherent uncertainty explicit to the group. Adding buffer doesn’t just shift the dartboard a few inches; it expands our range of accuracy. Larger buffers signal more uncertainty and require less precision, while smaller buffers signal more confidence and require more precision. This expression of certainty and confidence simply can’t be communicated with a single-pointed estimate.

By estimating with two numbers instead of one, we make a richness of information and strategies available.

Simple Changes

These so-called “confidence intervals”, two numbers instead of one, aren’t complicated. So why are they so absent from everyday life?

Although nothing prevented us from discovering them earlier, the first mention of confidence intervals in the scientific literature wasn’t until 1937. And it wasn’t until the 1980s that they were required in scientific journals. So if it took the most knowledgeable people decades to apply a simple solution to their most urgent problems, it’s not surprising that it’s taken the rest of us at least as long. That said, the goal of this essay is to speed up that process.

We can move towards accurate estimations by practicing two simple rules. The first: When giving an estimate, state the best- and worst-case answers. The second: Whenever getting an estimate, ask for the best- and worst-case answers.

“It’ll take two weeks” should raise a red flag. A better answer sounds like: “in the best case, one week; in the worst case, three”. Boom! Now we have a range of days to deliver the project, instead of one. Not only do we have a picture of the optimal scenario, we have an idea of the hidden risks lurking in the future.
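To make the idea concrete, here’s a minimal Python sketch of an estimate as a range rather than a point; the class and field names are my own invention:

```python
from dataclasses import dataclass

# Hypothetical sketch: a best/worst estimate as a scoring range,
# instead of a single-pointed dart.
@dataclass
class RangeEstimate:
    best: float   # best-case, e.g. in weeks
    worst: float  # worst-case, e.g. in weeks

    def hit(self, actual: float) -> bool:
        """An outcome 'scores' if it lands anywhere inside the range."""
        return self.best <= actual <= self.worst

project = RangeEstimate(best=1, worst=3)
print(project.hit(2))  # a two-week delivery lands inside the range
print(project.hit(4))  # a four-week delivery misses it
```

An outcome anywhere between best and worst counts as a hit, just like a scoring range on a dartboard.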

As a bonus, we’re capable of using the “green line, yellow zone, red line” imagery to guide our actions [4]. The green line is how we strive for the best and signal our competence. The red line is our insurance policy in case of surprises. The yellow zone is where time is critical, and we need to coordinate in case our plans need to change.

In a world of uncertainty, this extra byte of information is enough to create more accurate thought and action.

Conclusion

In summary, everyday estimations are hard, and confidence intervals are a simple way to make them easier.

Estimations aren’t hard because people are stupid. There are compelling social and technical reasons why we fail to make accurate estimations. Given these reasons, we see two types of estimators. Underestimators love tight estimations, while overestimators love padded estimations. Over time and across scenarios, these estimators are systematically right and systematically wrong.

We can break out of this dichotomy by changing the way we give estimates. By estimating with two numbers instead of one, we open up a world of information and process improvements. This small change in how we think, how we talk, and how we work is a huge, missing leverage point for us to be more effective and more efficient over time.

Footnotes

[1] Though difficult, accurate estimations are worth the effort. First of all, accurate estimations build trust. After a few false promises, people catch on. Like compound interest, commitments founded on years of accrued trust are worth much more than short-term deals based on deceptive estimates. Secondly, inaccurate estimations kick off a vicious cycle of degrading quality. Underestimations lead to insufficient resources, insufficient resources lead to shortcuts, shortcuts lead to poor quality, and poor quality leads to greater consumption of resources. These teams work harder and harder to do less and less. Inversely, overestimations lead to too many resources, too many resources lead to inefficiency, and inefficiency leads to further overestimation. These teams take longer and longer to do less and less.

[2] Parkinson’s Law is the insight that projects tend to consume the resources given to them, be it time, money, or people. Fear of Parkinson’s Law motivates people to justify tight deadlines. They claim that without tight deadlines, people tend to get lazy or unfocused.

But if Parkinson’s Law were the whole story, we’d never see projects go over budget! Since we see projects go over deadline and budget all the time, tight deadlines can’t be the only factor. Instead, keeping deadlines as tight as possible leads to burnout, in addition to all the other costs of underestimating. Though useful for timeboxing projects in general, Parkinson’s Law should not be used as the sole guiding principle for estimations.

[3] In professional projects, the same concepts apply a bit differently. The green time is the earliest a project can be ready, satisfying the underestimators; and the red time would be the latest a project can be ready, satisfying the overestimators. Underestimators can still push for the green time, while overestimators still find peace of mind in knowing the red time exists.

[4] “One week, give or take two days” is a common way to convey a similar amount of information. I focused on the “best case / worst case” framing, because it opens the door to the green / yellow / red style of project management. That said, two answers are always better than one, so if adding a “give or take” is an easier way to make this transition in how we estimate, I’m all for it.

Organic Proposals

Proposing solutions to problems is a critical part of knowledge work. The efficiency and effectiveness of our proposals are key measures of our success. It’s something I had to learn the hard way.

By the end of my second year as a software engineer, I had only made one or two proposals, for small problems assigned to me. As I got into my third year, I began proactively making proposals for open problems that I noticed.

At first, the proposal-making process was chaotic and painful. I would identify a problem, make its solution my passion project, and hammer away at a proposal for the team. The bigger the proposal was, the more I felt like a good engineer. I would proudly send my proposal to the team, ready to counter any criticism and push for my solution. I was confident that I was making a change for the better, and that it would be gratefully accepted by my team.

I was wrong. Those early proposals were consistently met with resistance: the bigger the proposal, the greater the resistance. The more energy I spent making a proposal, the more energy I needed to defend it. Given my naive confidence preceding each proposal, I was very confused and frustrated with this pattern. Thankfully, I had an experienced manager who I could talk to. He shared an idea with me that has greatly improved the efficiency and effectiveness of my proposals, and I’m happy to share that idea with you now.

Treating Ideas like Plants

The gist of the wisdom was this: evolve your proposal organically.

In practice, this looks something like:

  1. Identify the problem
  2. Propose solutions to the problem
  3. Gather feedback
  4. Start again at step (1)

My initial proposals didn’t grow organically. Instead, they grew in my secret homemade laboratory, where the problems were as big as I made them and the solutions were as clear as I could see them. Once the proposal was fully formed, I pushed it onto my team as the problem and the solution. This style put my team in an awkward spot. Maybe the problem was not as big as I thought it was, or maybe the solution was more complex than I could account for, or maybe it didn’t line up with our priorities at the time. Regardless, I was completely blind to all of this important context until I made the proposal. After I made the proposal, that context slammed into my proposal like a ton of bricks. These collisions hurt my confidence and taxed my relationships with teammates.

The organic approach doesn’t have the same hazardous momentum. Instead of identifying a problem and immediately working on the solution in secret, I would instead ask my team if they knew about the problem and whether any thought had been put into it yet. This is like planting the seed of a proposal. Maybe my teammates have an existing proposal for the problem that can bring me up to speed. Maybe there are agreed workarounds or planned solutions to the problem that weren’t clear to me before. Or maybe there is a shared desire for someone to tackle the problem head-on. In that case, the seed has been planted.

Given a planted seed, I need to nurture it within a small pot. The organic approach is to identify a few solutions to the problem and quickly sketch them out. Once I have a feel for the lay of the land, I can ask one or two thoughtfully selected teammates if they have any insight into the problem or what they think of those initial solutions. Their feedback is like a trickle of water on the seed. Each level of their involvement increases the strength of the seed until a clear direction is formed. Once we have something that can eventually turn into a solution, a sprout is formed.

Now that sprout needs more water and sun. I go back to my lab and flesh out the viable solutions until they hit some critical decision points or uncertainties. These decision points are like branches on our sprout. I don’t know which will be the strongest in the end, so I grow them in parallel until we need to choose one of them and cut the rest off. Maybe my favorite branch only looks good from where I’m standing, or maybe it only grew quickly in the beginning and won’t mature as nicely as another. Once the proposal starts looking healthy and attractive, our young plant is ready for prime-time.

With this healthy young plant in hand, I can more easily and confidently write a polished proposal that addresses all of the context and questions that arose during the seed and sprout phases. I’m no longer limited to my own awareness and preferences: I have my teammates’ awareness and preferences in mind as well. By the time the proposal is sent to the broader team, I should already have some support for it and some momentum behind it. Rather than artificially creating a proposal that my teammates are completely surprised by, I organically created a proposal that my teammates helped create. The gain in efficiency and effectiveness of such proposals has changed my career for the better.

Grow Organically, Die Organically

The advantage of the organic approach is not so much that my good ideas succeed eventually, it’s that my bad ideas fail early.

The problem with artificial systems is that the costs of wrong decisions are only faced after they get released into the world. With organic systems, the costs of wrong decisions continuously feed back into the development of the system. In the organic world, those that survive have adapted to the costs of their wrong decisions. In the artificial world, one can go very far without ever facing the real world. In nature, the “production” environment is consistently identical to the “development” environment. The longer something is confined in a laboratory, the more likely it is to collapse as soon as it faces the real world.

The beauty of the organic approach is that the wrong proposals die quickly and painlessly. When our fear of death for our own ideas is greater than the beauty of death for the wrong ideas, we hide those ideas from the world so that they don’t die young. When the beauty of death for the wrong ideas is greater than the fear of death for our own ideas, we push those ideas out into the world to see if they can survive and grow on their own. One approach is ego-centric and fragile, while the other is selfless and robust.

I’m not saying that after this 1:1 my ego dissolved into the atmosphere and I floated like a feather towards enlightenment. I still have my pet ideas that I push onto my team. Sometimes they’re rejected, sometimes they’re accepted; sometimes for better and sometimes for worse. But the principle still holds: organically grown proposals are more robust than artificially constructed proposals. The same applies to biology (Darwin), politics (see House of Cards), product (MVP), writing (Content Triangle), and business (The Lean Startup). The universality of such a pattern is a signal for me to recognize and practice it more deeply.

Caveats, Tricks, and Tactics

Choosing the right people to show your seedling is an art. People who give constructive feedback are a good start. People with the most context on the problem are an obvious second place to look. Decision-making influence is another important factor. If someone could clearly say no to your proposal for reasons that they are solely responsible for, then it’s worthwhile planting a seed in their garden before growing a tree in yours. They’re the one with the axe.

Focusing on your solution instead of the problem is artificial. There is an infinity of solutions to any given problem (someone has probably proved this somewhere), so it’s important to identify a solution that solves several problems at the same time. This requires understanding all of the various problems, micro-problems, and quality criteria beforehand. In addition, all of this context is constantly changing, and the optimal solution changes with it. When that context is understood, a solution that meets the most important criteria, as well as some of the small-but-adds-up criteria, can reveal itself, both now and in the future. Focusing a proposal on the problem, its surrounding context, and the various solutions has been much more effective for me than focusing on my one solution to a poorly articulated problem.

A proposal can be too organic. Nature is efficient in the long-run but inefficient in the short-term. That’s why we have venture funding and incubators, egg shells and embryos. Some things need a level of protection from the environment to increase their chance of success in the wild. Sometimes asking for permission can kill a good idea on arrival. That said, if I’m on a team where good ideas are killed quickly and surprise is the only way good things can happen, there are more serious problems than whether a single proposal succeeds or fails. The long-term costs of a hostile environment are far greater than the short-term reward of strong-arming any proposal.

Conclusion

The organic approach to proposals has greatly improved my effectiveness, efficiency, and relationships. Bad proposals die early and good proposals grow stronger. Feedback and support from the team is built from day one. Surprise and resistance are spread out over time, wasting less energy for everyone involved. Best of all, it’s a process that grows one’s team instead of growing one’s ego. All of these gains add up to a much more pleasant and impactful experience for ourselves, our teams, and our users. I hope you find it so as well.



Software Entropy

Defining Entropy

Entropy is a measure of chaos, or disorder, in a system.

My college physics professor described entropy using two shoe closets.

Imagine a clean shoe closet, where all shoes are paired and sorted by color. The closet’s entropy is the total number of arrangements its shoes can have. A clean closet’s entropy is relatively small. There may be a few pairs of grey or blue shoes that can be switched around – but this doesn’t add much complexity. In a closet with low entropy, it’s easy to add or remove shoes as needed.

Now imagine a messy shoe closet. None of the shoes are paired, and they’re all tangled in a big pile. How many possible combinations can these shoes be in? You can quickly find out by trying to pull out the pair you want. The messy shoe closet has a much greater entropy than the clean one.

In short, we measure entropy by counting the number of possible states a system can be in. More states mean more entropy.

Entropy in Software

In software, our building blocks are simple enough for us to measure entropy in a crude way. Take this model for example:

Transaction(
  createdAt: String,
  buyerId: String,
  sellerId: String,
  amount: Int
)

As simple as it seems, this model is like our messy shoe closet. There are many more ways for this model to be wrong than there are for it to be right. We can see that by comparing it to an organized shoe closet:

Transaction(
  createdAt: DateTime,
  buyerId: UserId,
  sellerId: UserId,
  amount: Price
)

When `createdAt` was an arbitrary string, it could take on invalid values “foo” and “bar” just as easily as a valid value “06-23-2020”. There are many more possible states that the field can be in, and most of them are invalid. This choice of a broad data type allows chaos into our model. This unwanted chaos leads to misunderstandings, bugs, and wasted energy.

When each field is strongly typed to a strict set of values, this chaos is minimized. DateTime, UserId, and Price are typed such that all possible values are valid. Accordingly, these types are more predictable, easier to manipulate, and lead to fewer surprises in practice.
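As an illustration, here’s roughly what the organized model might look like in Python. The `UserId` and `Price` definitions are assumptions made for this sketch, not a prescription:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import NewType

# Hypothetical sketch: narrow each field's type so that invalid
# states are hard to construct in the first place.
UserId = NewType("UserId", str)

@dataclass(frozen=True)
class Price:
    cents: int

    def __post_init__(self) -> None:
        # Reject invalid states at the boundary.
        if self.cents < 0:
            raise ValueError("a price cannot be negative")

@dataclass(frozen=True)
class Transaction:
    created_at: datetime  # "foo" is no longer representable here
    buyer_id: UserId
    seller_id: UserId
    amount: Price

tx = Transaction(
    created_at=datetime(2020, 6, 23),
    buyer_id=UserId("u-1"),
    seller_id=UserId("u-2"),
    amount=Price(cents=1999),
)
```

With these types, a value like “foo” can’t even be constructed as a `created_at`, so invalid states are eliminated before they can propagate.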

As in life, entropy is not all bad – some of it is desirable and some of it is not. In software, we need entropy to a certain extent: our code is valuable because it supports a variety of possible dates, users, and prices. But when this chaos grows beyond the value it adds, our software becomes painful to use and painful to maintain.

Modeling Software Entropy

Given our observations, we can describe a simple rule:

complexity = number of total possible states

A construct with only a few possible states is simple. Booleans and enums are much simpler than strings. A system with one moving piece is much simpler than a system with many moving pieces.
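Counting states makes the rule concrete. As a quick Python sketch (the `Status` enum is invented for the example):

```python
from enum import Enum

# A made-up enum with exactly three possible states.
class Status(Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"
    BANNED = "banned"

# A bool has 2 possible states, this enum has 3, and an arbitrary
# string has effectively unbounded states.
print(len(Status))
```

By the complexity rule above, the enum sits just above a boolean and far below a free-form string.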

Sometimes, our problems are essentially complex. In these cases, our solutions need some essential complexity to match. But when does complexity stop being essential? To tell the difference, we can use another rule:

cleanliness = number of valid possible states / number of total possible states

If there are thousands of total possible states but only two of them are valid, it’s a messy solution. A simple example of this is representing a boolean value as a string.

if value == "true":
    do_this()
elif value == "false":
    do_that()
else:
    raise ValueError("expected 'true' or 'false'")

There are many ways for this code to go wrong; not just in execution but also in interpretation. Keeping our solutions clean improves correctness, readability, and maintainability. It’s one of the primary measures of “quality” in my view.
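For contrast, here’s the clean version of the same branch as a small Python sketch (`handle` is an invented name). With a real boolean, the error branch disappears entirely:

```python
# Two possible input states, both valid: cleanliness = 2 / 2 = 1.
def handle(value: bool) -> str:
    return "this" if value else "that"

print(handle(True))
print(handle(False))
```

The type system now guarantees there’s nothing left to throw an error about.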

Minimizing Software Entropy

Given these definitions, we can ask ourselves some questions to guide our software decisions:

  1. How many possible states does this solution have?
  2. How many of those states are invalid?
  3. Is there any way to make the solution simpler, by trimming the number of total possible states?
  4. Is there any way to make the solution cleaner, by trimming the number of invalid possible states?

The power of this concept is that it smoothly scales up and down the ladder of abstraction. It applies to basic data types just as well as it does to solution architecture and product development.

How many moving pieces does our solution need? When an unimaginable requirement flies in and tries to blow our solution to the ground, how many pieces can be left standing? When an unexpected input arrives, do invalid states propagate across the system, or are they contained and eliminated on sight? In short, how clean is our solution?

To make life possible, we utilize chaos by creating complex systems that support a diversity of people and their use cases. To make life predictable, we combat undesirable chaos by keeping those systems as clean and orderly as possible.

In software, we work in a world where chaos is measurable and cleanliness is achievable. We just need the right set of signals and responses to make it happen.



Notes on “Productivity” by Sam Altman

Here’s the original article on Sam’s blog.

“Compound growth gets discussed as a financial concept, but it works in careers as well, and it is magic. A small productivity gain, compounded over 50 years, is worth a lot.”

What are your productivity gains?

  • make a list of what you need to do in a day
  • cut out distractions and focus for as long as you can on tasks that need it
  • break tasks down into the smallest doable chunks
  • arrange the doable chunks into a compelling picture for a project

What you work on

“Picking the right thing to work on is the most important element of productivity and usually almost ignored.”

“I make sure to leave enough time in my schedule to think about what to work on.”

“I learned that I can’t be very productive working on things I don’t care about or don’t like.”

“Everyone else is also most productive when they’re doing what they like, so do what you’d want other people to do for you [to maximize their productivity]. Try to figure out who likes (and is good at) doing what, and delegate that way”

When you work with someone, ask them “what do you like to work on?” You can’t answer this question for a lot of the people you work with now or have worked with before. You only have ideas, and those could be very wrong.

You can go even deeper on this. Before you start a project, understand what each team member’s interests are. Then craft the project with the qualities that maximize your team’s interest. Building something is deeper than just “here are the requirements, build them”. The second- and third-order qualities of a product and the team that builds it determine their success in the long run. Rather than just “what product do we want to build?” – “what kind of product do we want to build?” – and rather than “what team do we want to build?” – “what kind of team do we want to build?”. There needs to be a balance between focus on first-order results and focus on second-order results. Trying to milk as many first-order results as you can in the short term leaves you without a healthy team in the end, and then without a healthy product.

What do you do if you and your coworker like to work on the same stuff? Collaborate and share. Sometimes you get the fun thing, sometimes they do. Make it explicit that you’re sharing – don’t make it implicitly competitive. Learn and teach. Pitch joint ventures you can deliver together.

What do you do if you and your coworker like to work on different stuff? That’s easier, split the work that way. One problem may be that then both of you think that the stuff you like to do is the most important stuff, and that is an interesting problem. Ideally, both of you understand that you complement each other and fill in for each other’s blind spots. Without that balance and appreciation, problems arise.

“If you find yourself not liking what you’re doing for a long period of time, seriously consider a major job change.” Short-term burnout should be resolvable by some time off; otherwise there’s a deeper problem.

“It’s important to learn that you can learn anything you want, and that you can get better quickly.” This is along the lines of a keystone achievement. Some achievements are breakthroughs in what you believe is possible and open up a whole world of possibilities for life. The thing about keystone achievements you’ve seen is that they’re hard to foresee, they come accidentally as the result of overcoming some first-order difficulty.

“Try to be around smart, productive, happy, and positive people that don’t belittle your ambitions. I love being around people who push me and inspire me to be better. To the degree you’re able to, avoid the opposite kind of people.”

“You have to both pick the right problem and do the work. There aren’t many shortcuts.”

Prioritization

“My system has three pillars: get the important shit done, don’t waste time on stupid shit, and make a lot of lists”

“I make lists of what I want to accomplish each year, each month, and each day.” You tried the monthly and daily lists, and only the daily one ended up sticking. But that was a 10x improvement in your productivity. Maybe you should try the monthly again.

“Lists are very focusing, and they help me with multitasking because I don’t have to keep as much in my head. If I’m not in the mood for some particular task, I can always find something else I’m excited to do.” You find this too – as long as you’ve written the task down, you’ll come back to the list and it won’t get lost. That trust is critical in a complex and distracting environment.

“I try to prioritize in a way that generates momentum. The more I get done, the better I feel, and then the more I get done. I like to start and end each day with something I can really make progress on.” The power of positive feedback loops – start your day with the smallest positive feedback loop you can build.

“I am relentless about getting my most important projects done” Imagine the kind of life this statement of self-identification produces.

“I find the best meetings are scheduled for 15-20 minutes, or 2 hours.” This is great.

“I have different times of the day I try to use for different kinds of work. The first few hours of the morning are definitely my most productive time of the day. I try to do meetings in the afternoon. I take a break or switch tasks whenever I feel my attention starting to fade” – you used to focus best in the evenings (because you had trouble shutting out distractions); now you focus best in the mornings and afternoons. You’re not sure why yet.

“I don’t think most people value their time enough – I am surprised by the number of people making $100/hr that will spend a couple hours doing something to save them $20”

“productivity porn – chasing productivity for its own sake isn’t helpful” the diminishing returns of recursion

“Sleep seems to be the most important physical factor in productivity for me.” You’ve learned this the hard way as well. And not just in pure performance, but in other factors like emotional stability and enjoyment of work.

“great mattress makes a huge difference. Not eating a lot before sleep helps. Not drinking alcohol helps a lot.”

“I use a full spectrum LED light most mornings for about 10-15 minutes. If you try nothing else on here, this is the thing I’d try.” recommends this one

“Exercise is probably the second most important physical factor” – you see this too, mostly in second order effects

“Eating lots of sugar is the thing that makes me feel worst [and thus least productive]. I don’t have much willpower with sweets, so I mostly just try to keep junk food out of the house” – same with second order effects, but also pure performance in terms of focus

“Here’s what I like in a workspace: natural light, quiet, knowing that I won’t be interrupted if I don’t want to be, long blocks of time, and being comfortable and relaxed”

“Like most people, I sometimes go through periods of a week or two where I have just no motivation to do anything”

“In general, I think it’s good to overcommit a little bit. I find that I generally get done what I take on, and if I have a little too much to do it makes me more efficient at everything.” You’ve learned this recently. Being efficient is the critical skill – valuing your time and learning to earn multiples of the money for the same amount of time.

“Finally, to repeat one more time: productivity in the wrong direction isn’t worth anything at all. Think more about what you work on.” Also from the four-hour work-week – “doing the wrong thing perfectly doesn’t make it the right thing”. Some of the best advice on productivity.

Notes on “A Philosophy of Software Design”

What’s the most important general idea in software?

  • Knuth: layers of abstraction
  • Ousterhout: problem decomposition

Even though it’s the most important idea in software, there is no course that makes problem decomposition its central theme.

Simple rule: Abstractions should be deep

The width of an abstraction is how big the interface is. The size of an interface is how much the user needs to know in order to use it. This includes public functions, side effects, dependencies, and other context like tribal knowledge.

Width should be as small as possible and the depth should be as large as possible.

Depth is the functionality of the class – the value it adds, the complexity that it hides behind its interface.

A shallow class has more width than depth. It approaches the point where it costs more to use than the value it provides.

A deep class has a larger depth than width. It has a clear ROI.
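A minimal sketch of the contrast in Python (all names are invented for illustration): the first class adds almost nothing over the list it wraps and leaks it anyway, while the second hides real work behind one small call.

```python
# Shallow: the interface is nearly as large as what it hides, and the
# internal list is exposed, so callers still need to understand lists.
class ShallowStack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()


# Deeper: a tiny interface hiding tokenizing, normalizing, and counting.
# The caller needs to know one call; the storage details stay behind it.
class WordCounter:
    def __init__(self):
        self._counts = {}

    def add_text(self, text):
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word:
                self._counts[word] = self._counts.get(word, 0) + 1

    def most_common(self):
        return max(self._counts, key=self._counts.get)
```

The point isn’t that wrappers are always bad – it’s that the ratio of hidden complexity to interface size is what makes an abstraction pay for itself.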

“Abstraction provides a simple way to think about something that’s quite complicated underneath.”

You can evaluate any layer of the system in terms of shallowness and depth – functions, classes, interfaces, libraries, frameworks, protocols, components, systems, products, businesses.

Does your function actually hide any complexity? Or does it expose all the complexity within it?

Does the user have to understand the implementation to use your software? Then it’s useless.

When “it takes more keystrokes to invoke this method than to write the body itself”, it’s shallow.

One exception I can imagine is renaming an expression so as to make it easier to understand what the body is doing. Figure out an expression once, name it, then reuse it freely without having to figure it out again.
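A small sketch of that exception (hypothetical names): the named predicate adds no hidden depth, but it lets every call site read at the level of the domain instead of re-deriving the boolean expression.

```python
def is_ready(order):
    # Figure the expression out once, name it, reuse it freely.
    return order["paid"] and not order["on_hold"]

def can_ship(order):
    # Reads as domain logic, not as a data-structure poke.
    return is_ready(order) and order["stock"] > 0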

Inner platform effect is a symptom of a shallow abstraction.

Applies to writing, documentation, product, and marketing as well

“This is one of the biggest mistakes that people make: too many, too small, too shallow classes”

“Class-itis”: someone heard that classes are good, so they mistook it to mean that more classes are better.

My take: many small classes increase the surface area of the interface, increasing complexity

Classitis is rampant in the Java world

“In managing complexity, the common case matters a lot. We want to make the common case really simple”

Deepest, most beautiful abstraction is the Unix file I/O abstraction

Five deep methods: open, close, read, write, lseek
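Python’s `os` module surfaces the same five calls, so the depth is easy to see in a few lines: device selection, buffering, and permissions are all hidden, and random access reuses the same read/write calls via `lseek`.

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "unix_demo.txt")

fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_TRUNC)  # open
os.write(fd, b"hello unix")                              # write (sequential)
os.lseek(fd, 6, os.SEEK_SET)                             # lseek (random access)
data = os.read(fd, 4)                                    # read from the offset
os.close(fd)                                             # close
os.remove(path)

print(data)  # b'unix'
```

Five narrow entry points, enormous hidden machinery underneath – that ratio is the whole argument.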

Good design seems obvious in hindsight. But it’s not as easy as it looks. Bad design is everywhere.

Before Unix, you had two different kernel calls for random vs sequential access. The Unix design was not obvious then.

My take – product examples of deep abstractions:

  1. Google Search (the best imo)
  2. Apple
  3. Netflix
  4. Uber (another one of the best)
  5. Ikea

Product examples of shallow abstractions:

  1. Windows
  2. Jira
  3. Facebook

“Define errors out of existence”

Common approach is to catch and throw as many errors as possible. Better approach is to define semantics such that errors are impossible. You learned this a lot at Kifi.

My take: throwing exceptions increases the width of your abstraction, especially undeclared exceptions. Those add one unit of width because the abstraction throws the error, and another because the user has to discover on their own that it does.

Idempotency is a great way to eliminate errors: the abstraction enforces that a statement is true, whether or not it was already true. Context independence means a smaller abstraction.
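A sketch of an idempotent interface (names are invented): each method enforces a state (“the user is on the list”) rather than reporting an event (“the user was added”), so repeated or redundant calls are never errors.

```python
class MailingList:
    def __init__(self):
        self._members = set()

    def subscribe(self, email):
        # Already subscribed? Still fine - the end state is what matters.
        self._members.add(email)

    def unsubscribe(self, email):
        # discard, not remove: no KeyError if they were never subscribed.
        self._members.discard(email)

    def is_member(self, email):
        return email in self._members
```

Compare this with an interface that throws `AlreadySubscribedError` and `NotSubscribedError`: two whole error cases defined out of existence by choosing the right semantics.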

File deletion outside of Unix (e.g., Windows) wasn’t possible while the file was open. Unix simplified this by letting you delete the file anyway; it remained usable wherever it was already open, and once it was closed everywhere it was actually deleted.

Substring in Java is fragile: it throws if the indices are out of bounds. Fix it by returning the overlap between the index range and the string – out of bounds is treated as the edge of the string.
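A sketch of that fix as a hypothetical helper (Python here, not Java’s actual API): out-of-range indices clamp to the edges of the string instead of raising, so call sites lose their bounds-checking boilerplate.

```python
def substring(s, start, end):
    # Clamp both indices into [0, len(s)], with end never before start:
    # the result is the overlap of [start, end) with the string.
    start = max(0, min(start, len(s)))
    end = max(start, min(end, len(s)))
    return s[start:end]
```

Notably, Python’s own slicing (`s[start:end]`) already behaves this way – an example of the error-free semantics being chosen at the language level.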

“We’re going to try to keep people from making mistakes.” This philosophy is very difficult because there are many more ways to make mistakes than ways to do it right. So your interface expands to cover a breadth of use cases that don’t add value. Rather, make it really really easy to do the right thing and to run the common case.

When is it a good idea to throw exceptions? If you can’t carry out the contract with the user.

What matters and what doesn’t matter? Try to make what matters as little as possible, but no smaller than that.

Exceptions vs error values? Exceptions are most useful when you throw them the farthest. Each layer that an exception is thrown past is a layer of abstraction made simpler by the exception throwing policy. If you’re catching exceptions in methods you call, there’s not much value over a return value. My take: actually at that point it’s counterproductive, because the compiler won’t force the caller to handle the error.
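A sketch of the contrast (hypothetical names): in the first pair the `ValueError` travels through `load_config` untouched, simplifying that layer, while the second function catches immediately and is just a noisier return value.

```python
def parse_port(raw):
    # ValueError propagates: every layer above stays clean.
    return int(raw)

def load_config(pairs):
    # No error handling here - the exception passes straight through.
    return {name: parse_port(value) for name, value in pairs}


def parse_port_or_none(raw):
    # Catching at the call site: no simpler than returning an error value,
    # and the caller is no longer forced to handle the failure case.
    try:
        return int(raw)
    except ValueError:
        return None
```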

Mindset is critical to good software design. Working code is not enough.

Tactical vs strategic programming. Tactical is short-sighted. Strategic is far-looking.

Mistakes in design add up until the weight of their complexity is too large to clean up.

“Tactical tornado” is an engineer who is very productive at shipping shoddy code that leaves a trail of complexity in their wake

Working code is not enough – you must minimize accidental complexity.

Tactical approach is concave, strategic approach is convex

His hypothesis is that the inflection point comes after you forget why you wrote the code – that’s when the accidental complexity is magnified.

Most startups and rushed projects are tactical. Facebook is tactical – Google is strategic.

You can be successful with crappy code. You can also succeed with great code.

Culture of quality attracts the best people. The best people produce the best results. So sacrificing quality for speed may be first-order positive, but it has negative second-order effects that destroy long-term speed: complexity outweighs the speed benefits, and good people leave.

When writing new code, start with careful design and good documentation

When changing code, find something to improve. Make the abstraction the way it should have been from the start.

Small steps, not heroics. No rewrites, but careful and thoughtful pruning.

Making abstractions just slightly more general purpose can make them deeper and more valuable.

My take: I’ve seen general-purpose simplifications a few times. I also often see general-purpose complications. It’s an artistic move. Seems like it’s useful whenever making the abstraction general purpose separates the context from the function. Sometimes the context can muck up the function. So a general-purpose function plus a context-specific invocation is simpler than both combined in one.

Philosophy on hiring: hire for slope, not y intercept. Hire based on how someone is growing vs where they are today. Why?

Someone who can grow can add way more value than someone who is stagnant that has done the job before. Jobs change, requirements change, great performers adapt and poor performers don’t.

My take: how to measure slope?

  • What is the most recent new thing this person has done?
  • What is the delta between the job requirements and where the person is today? Will the job require them to grow, and do you estimate that they could grow into it? The larger the delta, the more a “yes” to that question determines whether they will add value to the team.

Conversely, as an individual, choose projects that you can grow into as much as possible. Failing to grow into the role means some loss of value, but attempting it can still be hugely beneficial depending on the costs.

Same applies to many contexts. Set the most ambitious goals that a team can grow into. Create products / education programs / technologies with the steepest learning curve that users can grow into. Write / create content with the steepest engagement curve that the audience can grow into.

“My best hires were the ones where I really enjoyed our conversation during the interview.” I have the same feeling about teams I’ve joined.