# 2014's Podcast Recommendations

I listen to podcasts a lot. My gift to you for the new year is a selection of podcasts I really enjoyed in 2014.

My recommendations are all audio-only because I'm often listening during a commute, shower, walk, or as I'm falling asleep. What did I miss?

# Storytelling

## Serial by This American Life

This is obviously the breakout show of the year, and it definitely got to me. The post-series interview with Jay keeps the mystery alive.

## This American Life

An essential standard.

## Welcome to Night Vale by Commonplace Books

A slightly creepy news cast about a slightly creepy town. Goodnight, Night Vale. Goodnight.

Great storytelling on a wide range of subjects.

## Reply All by Gimlet Media

Like it says, a show about the Internet. Fascinating stories about how the Internet shapes us and how we shape it.

## StartUp by Gimlet Media

A series (like Serial) of episodes recording the journey of one entrepreneur trying to start up.

## Selected Shorts by PRI

Short fiction read by actors. Makes me feel like the golden age of radio might still have a place.

## TED Radio Hour by NPR

This is an excellent mashup between TED talks and storytelling, often going behind the scenes or deeper into subjects presented at TED events.

# Ideas

## TED Talks Audio

These folks are so awesome they compiled all the episodes available into a Google Spreadsheet. I prefer the audio format for listening on the go, but each episode does suggest "This talk contains powerful visuals."

## Ruby5

Get five minutes of curated news about the Ruby and Ruby on Rails community twice a week (Tuesdays and Fridays).

## The Ruby Rogues by DevChat.TV

I have to be honest: this is hit or miss. In the last few months, though, it's been more hits than misses. I like the regulars on the show and someday I hope to be grilled by them about technical leadership. If I am so lucky…

## The ChangeLog

I like how polyglot this podcast is. Lots of communities get play including, for the first time, Perl just a couple weeks ago. Their website also has a nice link blog of interesting open source projects. Keep an eye on it.

# Current Events

## Wait Wait… Don't Tell Me! by NPR

A family favorite every weekend.

## The Rachel Maddow Show on MSNBC

Scroll to the bottom of that page for podcast subscription links. Warning: political alignment obvious.

# Hootsuite Has Best Response to Criticism

This is the gold standard. Customers were unhappy. Hootsuite heard them, were awesomely transparent, and did something about it.

I have a thing for transparency. I think it makes most situations better. It takes courage to be open about shortcomings, even as a team, and to address them in the open. What Hootsuite did here is wonderful, and can be applied to any software team's circumstances.

Maybe you are one of many teams in a large company and you want to be more transparent than the current culture accepts, or maybe like Hootsuite it's your whole company. Either way I encourage you to have the courage to be as transparent as possible and get other people to join in. It's liberating, and it's much less work than the alternative.

With this kind of response Hootsuite shows they're open and accepting of criticism and they'll do something about it. They proved it's safe to give them feedback. Encouraged, even. They now have a reputation for courage and conviction, and for caring more about what they do than protecting themselves. That's one hell of a great reputation to build.

# Integrated Tests Are a Scam

This is an excellent conference talk laying out the core problem with reliance on integrated tests¹. I've seen this problem at several companies. Here are the steps to reproduce:

1. Build a complicated system organically.
2. Declare testing a panacea.
3. Build a large, QA-led integrated test suite at great cost.

Now a huge, brittle test suite exists which provides no direct value to development, where it's needed most, and doesn't address the root problem: poor system design.

## Integrated test hell.

J.B. Rainsberger describes the problem well. If your project has a monolithic, external test suite which relies on the entire architecture to run, you are in this special hell right now. The clearest representation I can think of is his explanation of how many tests you'd have to write to get value out of integrated tests versus isolated tests².

A software architecture with a few interconnected components (let's say, for example: a database, REST API, UI, and job queue) requires tests for each function of each component. If we're relying on integrated tests we have to write tests to exercise every function of every component, and every combination of connections between every function of every component. As you write that software you need to write $O(n!)$ integrated tests. You can't write that many tests. You don't have enough time to write enough tests to have confidence in that system. That's scary. That's impossible.
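
A back-of-the-envelope sketch of how the numbers diverge. The component names and path counts here are my own illustration, not from the talk:

```ruby
# Hypothetical path counts per component; the names and numbers
# are illustrative, not from the talk.
paths = { database: 4, rest_api: 5, ui: 6, job_queue: 3 }

# Isolated tests: one test per path of each component, added up.
isolated = paths.values.sum             # 4 + 5 + 6 + 3 = 18

# Integrated tests: covering the combinations of paths across the
# connected components means multiplying the counts together.
integrated = paths.values.reduce(:*)    # 4 * 5 * 6 * 3 = 360

puts "isolated tests:   #{isolated}"
puts "integrated tests: #{integrated}"
```

Add one more path to any component and the isolated count grows by one while the integrated count grows by a multiple. That gap only widens as the system does.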

## The cycle.

I love this because it's clear and true. We've all seen this run-on sentence loop:

100% of our integrated tests pass but there's a mistake³ in our software; so we write more integrated tests to fill in the cracks which allows us to design more sloppily and gives us more opportunities for mistakes, and spending time on integrated tests means less time for isolated tests which increases the likelihood that 100% of the tests pass but we still have mistakes.

There is a strong correlation between large numbers of integrated tests and design problems. Integrated tests don't offer any pressure to improve our designs. Isolated tests do. Stop pretending integrated tests are helping you. Write isolated tests.

To quote J.B.:

> The real benefit of isolated tests — testing one function at a time — is that those tests put tremendous pressure on our designs. Those tests are the ones that make it most clear where our design problems are. Remember that the whole point of test driven development is not to do testing; it's to learn about the quality of our design. We use the theory that if our design has problems then the tests will be hard to write. The tests will be hard to understand. It'll be difficult to write these small, isolated tests to check one thing at a time.

If you have more isolated tests than integrated tests, chances are you have a decent design with clear interfaces and contracts between collaborating systems. This path is cheaper, faster, less likely to allow mistakes, and provides high-bandwidth feedback on the quality of your software design. As you write this software you need to write $O(n)$ isolated tests.

You don't have to multiply the code paths in your system to get thorough coverage; you can just add them. You go from a combinatorial explosion of tests-to-code-paths to a linear increase in tests. That's possible.
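
As a sketch of what an isolated test looks like in practice, here's an invented example: a `Billing` class whose only dependency is a gateway's `charge` method. None of these names come from the talk.

```ruby
# Hypothetical code under test. Billing depends on its collaborator
# only through an interface (#charge), so no real gateway, database,
# or network is needed to test it.
class Billing
  def initialize(gateway)
    @gateway = gateway
  end

  def charge_with_tax(cents)
    @gateway.charge((cents * 1.1).round) # 10% tax, for illustration
  end
end

# A tiny fake stands in for the real gateway.
class FakeGateway
  attr_reader :charged

  def charge(cents)
    @charged = cents
  end
end

# The isolated test: fast, and no integrated architecture required.
fake = FakeGateway.new
Billing.new(fake).charge_with_tax(1000)
raise "expected 1100" unless fake.charged == 1100
puts "ok"
```

In the talk's terms this is a collaboration test of `Billing` against the gateway's contract; the real gateway then needs its own contract tests on the other side of that interface, but neither set of tests requires booting the whole system.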

There's a lot of gold in this talk. Watch it. Twice!

1. Not to be confused with integration tests (referred to in this talk as collaboration tests). Integrated tests require a complete, integrated architecture to run. Integration tests simply test the collaboration between independent components.

2. Isolated tests is a good name for tests operating on a specific function within a software architecture which exercise that function directly, in isolation, independent of any external collaborators in the architecture.

3. Sometimes we call these defects or bugs; I agree with the speaker that's too abstract. It's a human error (more likely a series of human errors). Everywhere else in the world we call those mistakes.

I was talking with someone the other day about my time as (Interim) VP of Engineering at Socialtext. Did I enjoy that? The question was framed like this: some people just like doing things and not dealing with the social aspects of management. But I wonder: are the two really so different?

Originally published by me on March 25, 2009 and reprinted here as-is.

Software development is creating, maintaining, and evolving a system. Use whatever action verb you like, you are working with a system. That system can be made better or worse by your actions. If you fix a bug the system is better. Remove a networking bottleneck? Better. Introduce a needless database query on every iteration of a loop? Worse.
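
That last example, sketched with invented names: a fake data layer that counts queries makes the cost of a needless per-iteration query visible, and the fix is hoisting it into one batched lookup.

```ruby
# Invented stand-in for a real data layer; it counts queries issued.
class FakeDB
  attr_reader :query_count

  def initialize(rows)
    @rows = rows          # { id => customer_name }
    @query_count = 0
  end

  def find_order(id)      # one query per call
    @query_count += 1
    @rows[id]
  end

  def find_orders(ids)    # one query for the whole batch
    @query_count += 1
    ids.map { |id| @rows[id] }
  end
end

db  = FakeDB.new(1 => "Ada", 2 => "Brian", 3 => "Grace")
ids = [1, 2, 3]

# Worse: a needless query on every iteration of the loop (3 queries).
ids.map { |id| db.find_order(id) }
puts db.query_count   # 3

# Better: one batched query, then cheap in-memory work (1 query).
db.find_orders(ids)
puts db.query_count   # 4 total now; the batch added just one
```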

Software doesn’t work in isolation. The system is bigger than that. If you increase the memory requirements for your software the servers had better have enough memory to manage it. If you rewrite your code in Python a host of changes are required to make that change possible.

How are teams much different? Leading a team requires the creation, maintenance, and evolution of a system. Again, you can make it better or worse. Help a peer solve a problem with a better tool and your system is better. Reduce needless process? Better. Introduce a needless process on every iteration of development? Worse.

I think both people and technology are irrevocably intertwined. In fact, hacking on one and not the other will cause the performance of both to suffer. This is called Sociotechnical Systems Theory.

# Joint Optimization

A team survives - and eventually thrives - through the joint optimization of their sociological and technological systems. Improving one alone often leads to recessive tendencies in the other. The nature of a team is the symbiotic relationship between its people and technology systems. Success can’t be realized by improving technology alone.

This concept is often hard for everyone. Technologists find it easy to ignore social aspects of an organization. Non-technical specialists are reluctant to consider the artificial reality of technical objects like software. So it can be hard to consider both technical and social aspects of a system.

The delivery of meaningful value to customers requires the actions of both people and technical objects. One can’t improve without the other. Technical achievement is equally as important as social advancement.

## People are (part of) Technical Strategy

Hacking on the social realities in your technology team has strategic value. A healthy team can do more than generate fantastic technological innovations because a healthy team can more accurately assess the environment they’re in. A viable business strategy can’t simply focus on organizational capabilities as most technologists are prone to do. The environment your team operates in isn’t the primary strategic factor as many non-technical specialists see it.

The decision isn’t either/or among organizational capability and environmental reality. The winning strategy is both/and: react to environmental realities within the context of current and improved organizational capabilities.

## The API is Different

The major difference between people and software on a technical team is the API. You’re still debugging, refactoring, creating, evolving, and removing what you don’t need. As a technical team leader you need to talk to both types of interfaces. The API is very different for debugging people vs. debugging software.

If you want to build world class software you have to build a world class team.

This is also why it’s hard for a star programmer to become a star manager. They never spent time learning the People API.

## Footnote

Some of this thinking was done as research for a previous company. I was asked in appropriately vague terms how to fix our software delivery process. The pain was that it took months to get even the smallest changes to the customer. When I searched for the root of the problem it became clear there were two intertwined problems: one technical and the other social.

Half the company was looking for a quick technical fix that would make it all better. The other half wanted to add process to overcome the social issues. It was obvious to me we would have to fix both if we really wanted to solve the problem. Any solution that ignored the fact that we were a socio-technical organization was lacking.

# hello, world

As most of us know by now it was Brian Kernighan who wrote the first known hello, world program:

```c
main( ) {
        printf("hello, world");
}
```

Hello, world!