Welcome to the Rabbit Hole podcast! We’re delighted to welcome you to our first full-length episode.
Our panelists today: David Anderson, Emmanuel Genard, and William Jeffries.
After starting off the episode with a teach-and-learn moment about leaky abstraction, we move on to the core focus of the episode: TDD. TDD, or test-driven development for the uninitiated, is “an evolutionary approach to development which combines test-first development where you write a test before you write just enough production code to fulfill that test and refactoring” (definition from Agile Data).
If you don’t completely understand the concept of TDD from that description, or if you have lots of questions now, don’t worry! That’s what this whole episode is about. We’ll talk about TDD in great detail, addressing the ways in which it’s counterintuitive and can be tricky at first.
More importantly, we discuss how TDD can offer incredible value and make things ultimately easier and simpler, even though it may not feel that way at first. It can ease the demand on your brainpower, and even reveal your own assumptions. Finally, we discuss why TDD may not be relevant in certain environments.
Listen in to hear more about TDD and what it can do for you!
In This Episode:
[00:22] - We have a teach-and-learn moment about leaky abstraction.
[02:28] - Working with someone who has never done any front-end work can be a great experience.
[03:35] - We learn what the definition of TDD is, and why it’s counterintuitive.
[07:33] - How do you decide to use TDD? Or is it something that you use all the time? The consensus is to use it all the time, or close to it.
[11:01] - Part of starting with TDD is understanding the first step. This may mean you need to spend some time “mucking around” before you even know what to test.
[14:16] - One of the times when it’s hardest to do TDD is when you don’t know how to test something.
[15:25] - We hear about the potential value of creating a stash as a reminder, and the possibility of making a spike branch. The guys then joke about whether stashing behavior translates to desk organization.
[18:32] - After a short break, we hear more about an early experience with TDD.
[21:09] - Pairing with someone junior to you offers value that’s easy to underestimate. It makes you stricter on yourself and encourages you to be more concerned with best practices.
[22:07] - We hear about the experience of getting into TDD on your own. We also learn that TDD can reveal your own assumptions.
[24:03] - Developers need to hold a lot of information in their heads while trying to solve a problem. TDD is a great way of preventing things from accumulating in RAM, so to speak.
[26:38] - The guys talk about the pros and cons of remembering the moment you discovered TDD.
[29:41] - We hear which programming languages the panelists prefer to do TDD in.
[32:40] - There’s a bunch of front-end stuff that isn’t testable.
[33:57] - Have the panelists seen the Martin Fowler talk “Is TDD Dead?”
[36:45] - The small applications that college students tend to write for their courses aren’t a realistic representation of what happens when you do this for a living (in terms of the necessity of using TDD).
[39:22] - Google recently released an API called Actions on Google.
Transcript for Episode 02: TDD
Michael Nunez: Hello and welcome to the Rabbit Hole podcast. I'm your host, Michael Nunez. The panelists today are-
David Anderson: David Anderson.
Emmanuel Genard: Emmanuel Genard.
William J: William Jeffries.
Michael Nunez: ... and today we'll be talking about TDD. TDD is Test-Driven Development, and we'll get more into that in just a second. Do we have any teach and learns today that we'd like to discuss?
Emmanuel Genard: I do. I learned ... I'd kind of heard this term before, leaky abstraction, but I experienced it for the first time, I'd say in the last two weeks or so. So an abstraction, as you use it in programming, is usually where you kind of abstract away the logic of something, or the steps to something; you just maybe hit an API to call it, right?
Michael Nunez: Mm-hmm (affirmative).
Emmanuel Genard: Or you use a library, something like Underscore on the front end, that does a lot of functional programming stuff for you. So we're using an abstraction at the client, and a leaky abstraction is where you have to go under the hood to figure out how the abstraction works, 'cause you have to fix it, 'cause it doesn't work the way you want it to work. If you're using a third-party library to do a job, and you realize that the job you need it to do, it's not doing it right, or it's doing it in ways you don't quite understand, you have to go in there and change the code inside of that library or override it. That is a leaky abstraction, because the code is leaking through the black box it's supposed to be in. That is what I've learned, and it was interesting to see, because it's hard to write code, and the several people who made this library that we're using I'm sure worked really hard, but still there are a bunch of holes in it, that stuff just leaks out and you gotta, like, get your buckets ready.
David Anderson: So, how do you handle that? Do you, like, overload the class through inheritance, or do you monkey patch it?
Emmanuel Genard: We are going to get rid of that library entirely on this project.
David Anderson: That's how you do it.
William J: Good strategy.
David Anderson: That's how you handle it. Just don't do it.
Emmanuel Genard: In the meantime, what we've done is the work that it was doing, we're using something else to do, right? Either we're managing the cloud ... it's an app that manages the cloud infrastructure for a client. And instead of using this third-party library that deals with the cloud, you know, AWS or Google Cloud or wherever, we are just gonna use their CLIs or their APIs directly. Right now and then [inaudible 00:02:24] we're going to be using their APIs directly.
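A common way to contain a leaky dependency like the one Emmanuel describes is to wrap it in a thin adapter, so the rest of the app never calls the vendor code directly and the library can later be swapped for direct API or CLI calls in one place. A minimal, hypothetical Ruby sketch; all class and method names here are invented, not from the episode:

```ruby
# The app depends only on this small interface, so the leaky third-party
# cloud library can be replaced without touching callers.
class CloudStorage
  def initialize(backend)
    @backend = backend # anything that responds to #upload works
  end

  def upload(path, contents)
    @backend.upload(path, contents)
  end
end

# Replacement backend standing in for direct vendor API/CLI calls,
# stubbed with an in-memory store so the sketch stays runnable.
class DirectCliBackend
  def initialize
    @store = {}
  end

  def upload(path, contents)
    @store[path] = contents # a real version would call the vendor API here
    "uploaded #{path}"
  end
end

storage = CloudStorage.new(DirectCliBackend.new)
storage.upload("reports/q1.csv", "region,total")
```

Because callers only ever see `CloudStorage`, dropping the leaky library is a one-class change rather than a project-wide rewrite.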
Speaker 5: As of late, I've been working with someone who has never done any front-end work. This person is a Scala developer. So I'm pairing with this person and teaching them React, 'cause that's what we're doing at the client. And it's been an awesome experience, because while this person may have not done any React work, I haven't done much Scala work, so the fact that we can take a story and go from beginning to end and knock out the requirements for this story has been great. 'Cause at the same time as I'm pairing with this person and kind of showing them how cool React is and how fast things can change depending on the data that's provided to components, this person is showing me these Scala functions and how that data is being managed and passed on to the front end in the first place. So to get, like, a full realm of the entire code base and how it works from beginning to end has been awesome. So it's been both a learning and a teaching, and it's just been awesome. It's been great.
Michael Nunez: Today we'll be talking about TDD, Test-Driven Development. Anyone want to drop a definition as to what it is?
Emmanuel Genard: I will try. Please fill in if I've messed up. So the definition of TDD is, when approaching writing an application, you think about what you want, say, the smallest piece of code to do. You write a test first, to test what the piece of code will do. You'll write the test. You run the test. You watch the test fail. You'll write the piece of code to get the test to green, then you'll refactor, as in go over that code again to see if you can make it better. If not, you would then generally just keep going. So it's the red-green-refactor cycle. You write a test, you see it red. Make it green, then you refactor it, right? I'd like to add another part that I think is really kind of usually not the [inaudible 00:04:20], which is the thinking. Which is, where do you start? How do you write that first test? Right? And what is that smallest piece of test that inches you along? I've myself [inaudible 00:04:32] this problem all the time, where you spend too much time trying to solve the whole problem. I like having a complete solution in my head. Or I like to think I have a complete solution in my head. So that's a long-worded definition of TDD.
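Emmanuel's red-green-refactor loop can be sketched in a few lines. The example below uses Minitest, which ships with Ruby; the cents-to-dollars function is a made-up stand-in, not something from the episode:

```ruby
require "minitest/autorun"

# Step 2 (green): just enough production code to satisfy the tests below.
# When the tests were first written, this method didn't exist yet, so the
# suite started out red.
def cents_to_dollars(cents)
  format("$%.2f", cents / 100.0)
end

class CentsToDollarsTest < Minitest::Test
  # Step 1 (red): write the expectation first and watch it fail.
  def test_formats_whole_dollars
    assert_equal "$2.00", cents_to_dollars(200)
  end

  # A second small test "inches you along" and guards step 3, the
  # refactor, where you clean up the code while the suite stays green.
  def test_formats_fractional_dollars
    assert_equal "$0.99", cents_to_dollars(99)
  end
end
```

Running the file executes the suite; the point of the exercise is seeing each assertion fail once before the method exists.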
David Anderson: Cool. I think that the more counterintuitive part is that the test comes before the rest of the code. Whereas a lot of places have tests and test coverage, actually having the test prepared up front really shapes the way that you think about writing the code, and shapes how you think about what's required to write the code. And it helps guide you to a more minimal solution than you might otherwise have.
Emmanuel Genard: That more minimal solution is really helpful. I learned most about TDD from watching "Let's Code: Test-Driven JavaScript" by James Shore, who wrote "The Art of Agile Development" book that we give to everyone here at Stride. And he mentioned letting the design of the application emerge as you go, bit by bit, instead of thinking about ... and I noticed it today, that often when working we don't really let it emerge. We try to figure the whole thing out, even if it's a small problem. Just letting that solution emerge. And it takes, I think, a certain amount of mental discipline, for me anyway, to not try to solve the problem first. Just try to go, let me just get the function to exist, right?
Speaker 5: Because that's the hardest part about programming, is like naming the function. Like, you can write the test and then, what am I going to name this, guys? Is it descriptive? Is the next developer going to hate me? It's always a toss-up. But TDD has always been an evolving thing for me. If I can use it every day, it's great. I do get caught up in the feature, and how do I begin writing this massive thing that I have to test? And then you realize you have to write multiple tests to cover every possible case to ensure that your code is covered. But that's, like, the fun. I think that the thinking part is the coolest part of it. And then you just start writing tests, making it happen.
David Anderson: Yeah. I think one of the great things about that is not having to think about the entire solution. Sometimes you have something where you just can't figure out the entire thing, and it can kind of lock you into a gridlock where you keep on thinking, what about this, what about this, what about this? And then if you just have a test, you just let yourself go with that. Then it's kind of freeing, 'cause it's a really simple problem: I just need to get this thing from red to green. And then we'll be good.
Michael Nunez: So, when do you guys use TDD? When do you say, you know what, I'm going to test-drive this? Or is it something that you start on every project that you do?
Emmanuel Genard: All of the time in magical fantasy land.
Speaker 5: Me personally, I like to use it all the time. I try to install a testing framework even for personal projects that I don't complete. And that's probably the reason why they never get completed, because I just get caught up in writing the test, and then it gets really big. Then I start breaking it down, writing smaller tests, small small small stuff. But I try to use it all the time. I find it really helpful to test-drive certain features and implementations.
David Anderson: Yeah. The thing that I really like about it is that it makes you feel more free about just deleting things and refactoring. If you don't have the tests, then it's always really stressful to do that. Like, oh my god, I have no idea what's going to happen when I change this. But at least the test gives you some kind of assurance that the world will go on after you refactor. But as far as when to use it: ideally, you should use it all the time. I think that an interesting question is, when is it more challenging to use it? Like, sometimes you might not be sure how to abstract away the dependencies, or if you're testing a new kind of thing you might not be entirely sure what the best way to mock those dependencies is. Like, where should you draw the line for what the test is? Is it like a unit test or is it like an end-to-end test?
Speaker 5: The reason why I like to use it and implement it everywhere is because I've worked in code bases that were not tested. And then, like, you have to make this crucial change, but we have no tests. You're going to make this change and then you're going to figure out whether you broke something, or you broke everything, or you broke nothing. The anxiety that I get from looking at a code base that doesn't have any tests when I have to make a change to it, I'd rather just deal with setting up a framework and testing everything. Because then ... I go to sleep at night knowing, all right, everything passed. If it's broken, that's not my fault. I last saw it green. And then that was -
David Anderson: Right. Yeah. The first projects that I really got deep into were Java projects. Like, enterprise Java. And it was before TDD was really a big thing, and so we didn't have any tests on it. But I guess, in a form, having static typing is its own kind of test, because it needs to compile. If you mess something up really badly, then it's not going to even build. So you have that protection. You know what goes into a function. You know what comes out of a function. And you have that assurance. But then when I was learning Python, you don't have [inaudible 00:10:25]. I first tried to program without TDD and it's more confusing, because it's dynamic and you don't really know what's going to come in and out. But then when you have the test cases in place, you have more assurance about what's going on. I can't even imagine writing Python without tests now. Or any kind of code, really.
Speaker 5: Yeah. It's like crazy to me. I just have to refer to some tests that I can read and learn about the actual implementation. And then it makes me understand the implementation so much better.
Emmanuel Genard: I found it difficult in certain situations when you are coming into a place where ... Like, I came into the project that I'm in, I didn't know anything about the domain. And part, I think, of starting with TDD is kind of understanding the first step. And so I remember feeling really confused, like, well, I need to muck around here for maybe until lunch or after lunch to figure out what is going on here before I even know what to test. And the other side of that is that a lot of times that mucking around ends up not being a spike, as it's normally called, but ends up being the code that I just then write tests for afterwards. That's mostly what happened in the project that I'm in right now. It's mostly just been mucking around. Like, this works, this works, and all right, we should definitely write tests before we ship it. And we pretty much always do. But I don't know if it's the type of project I'm in, right? Or whether I can in fact find a way, even without knowing that much about the domain, to start with the test first, and how I might approach that.
David Anderson: Yeah. That's an interesting question. 'Cause when you're learning a new technology and you don't know everything about it, then that's the spike solution. You're just kind of playing around with it, and then it's like, oh, it works. You're supposed to throw those away, but it's very tempting to keep it.
Emmanuel Genard: It feels so good when it works.
William J: Yeah. My strategy for that is, if I'm going to do some sketching and not TDD it, then I have a hard rule: that code gets deleted. And then once I understand the domain, I start from scratch and I TDD it out. And I think it's worth it, even though it can be painful to throw away a bunch of code, because you know that at the end of the day your code is going to be much better tested. If you start at red and you make it turn green, it's going to cover more edge cases, and you're going to have a greater degree of certainty that your tests are actually catching the kind of mistakes that you want them to. One of the problems with writing tests after the fact is that it's much harder to know that the test would fail if the code that you wrote wasn't there. And you can go back into the code and you can comment some lines out and rerun the tests, but it's really easy to do it wrong. [crosstalk 00:13:29]
David Anderson: Yeah. Or have a test that might never fail. I was in a situation recently where there was a problem with the version of the mock library we were using. If you tried to assert that a certain mock was called, it would just be like, yeah, I was called, I'm good, whether it actually was or not. So there were numerous test cases that could never actually fail. But the code worked. But then when you change the code, and the code doesn't work but the test still says it does, that's a problem. So, like he said, that's the test for your test. If it never turns red, then you've obviously done something very bad.
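David's "test that can never fail" is easy to reproduce with a hand-rolled test double. In this hypothetical sketch (all names are invented, and this is not the mock library from the episode), the broken matcher reports success regardless of what the production code did, which is exactly why you want to see every test red at least once:

```ruby
# A tiny hand-rolled spy that records the calls made to it.
class Spy
  attr_reader :calls

  def initialize
    @calls = []
  end

  def notify(message)
    @calls << message
  end

  # Buggy matcher, like the broken mock assertion David describes:
  # it claims the spy was called no matter what actually happened.
  def broken_called?
    true
  end

  # Fixed matcher: actually inspects the recorded calls.
  def called?
    !@calls.empty?
  end
end

spy = Spy.new
# Production code never calls spy.notify, yet the broken check passes:
spy.broken_called? # always true, so a test built on it can never turn red
spy.called?        # false, correctly failing until notify is called
```

Commenting out the production call and re-running is the quick sanity check: a test built on `called?` goes red, one built on `broken_called?` stays green forever.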
William J: The time that I find it's hardest to do TDD is when I don't know how to test something. Like, I'm working with a new library, or something that's really asynchronous, or something that introduces some kind of complexity that makes it difficult to test. And then I go to TDD and I'm like, I don't really know how to assert this. How do I set up this test at all? And then I have to go in and spike out a solution to understand the problem domain and then delete it. And then usually spend a lot of time googling for blogs that explain how to [crosstalk 00:14:50]
David Anderson: Yeah. But I guess once you do that kind of thing, once you write it once and you have the knowledge, then I find it always easier to go through it. Then it's like muscle memory and you can get into a proper flow. Whereas when you're just mucking through and building a solution for the first time, it's never flow. It's always just, like, Google. And docs.
William J: And when you delete a bunch of code and then rewrite it, even if it's not because it wasn't tested the first time, the second time you write it, your code is better. It's just better organized, because you know where you're going.
Speaker 5: I find myself ... we use git at the client. And you do not want to see my git stash list. Because what ends up happening is, I figure out what I'm trying to do, and then rather than deleting it, I stash it, just in case in the midst of me trying to test-drive everything I miss something. So then I refer to the stash to see what was it that I was testing, or what was it that was happening. Then I'm like, okay, I got a little bit of information from before, because I would hate to write the code and then delete it and then be like, man, what did I write again? So I stash it, and then ... oftentimes I don't ever pop it back into my code base. So I end up with, like, a huge stash list that I don't ever clean. And I don't remember what it was. So at one point I'm like, I got ten entries. Stash clear. And it's just wiped out. It just piles up, because those are all the times that I spike something and then I know what to test now, so I'm going to stash it, start from the beginning, and do it over.
Emmanuel Genard: That is a really good way of getting rid of stuff you spiked without having to ... what I usually do is make a spike branch, except I never delete the spike branch.
Speaker 5: It just exists.
Emmanuel Genard: It just exists.
David Anderson: It's always there.
Emmanuel Genard: Yeah. But that's a good way. Just stash it ... you can just refer to the stash. It feels like less friction. There's less friction with that.
David Anderson: Mm-hmm (affirmative). I like that. So I've used stash before just to pop changes off and put them back. But can you do a diff on a stash between [crosstalk 00:17:12] check out?
Speaker 5: I think you can do a stash diff. Or you can even look at the hash from the stash and git diff it with HEAD and what you have. And there you can see the difference.
William J: Yeah. If you do a git stash list then it gives you the [inaudible 00:17:30].
Speaker 5: And it shows you all the stashes you've made, in a stack. And the one at the very top is the one that you would pop into your code base. So you can see it. And it's like stash number 0, stash number 1, stash number 2. And then you can have up to 10 or 15, I don't remember. But I always have the maximum amount of stashes, because I'm always stashing code after a spike. And, you know what, I time-box myself 25 minutes: I'm going to test-drive this entire thing, and that's it.
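The stash workflow the panel describes can be tried end to end in a throwaway repo. This is a sketch (the file names are invented); `git stash show -p` and `git diff stash@{0} HEAD` are the commands that answer David's question about diffing a stash:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name Dev
echo "real code" > app.rb
git add app.rb && git commit -qm "initial"

# Spike something, then park it instead of deleting it.
echo "# spike: experiment" >> app.rb
git stash

# The stack of stashes, newest first; stash@{0} is what "pop" would apply.
git stash list

# Diff a stash: show what it contains, or compare it against HEAD.
stash_diff=$(git stash show -p "stash@{0}")
git diff "stash@{0}" HEAD -- app.rb

# Periodic cleanup once the lesson is extracted (git stash clear drops all).
git stash drop "stash@{0}"
```

Stashes are referenced as `stash@{n}`, so a specific old spike can be inspected or dropped without popping everything above it.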
David Anderson: I'm trying to imagine what your desk looks like. If your desk is like your stash, or if it's just perfectly clean.
Speaker 5: It's just like the stash. It's a mess.
David Anderson: That's right. There's all that Jiffy peanut butter.
Speaker 5: Yeah, yeah. I have all sorts of peanut butter and oatmeal. And, what is it, index cards and Sharpies and everything. Pass by sometime. Get some food.
David Anderson: IRL stash.
Michael Nunez: We're gonna take a quick break. After that, we're going to talk about our first experiences testing, and what we do with different languages. Don't go away. And we're back. Before the break we were talking about our first time doing TDD, and I have an interesting experience as to me doing TDD. I believe it was a Rails shop I was working with, pairing with someone in TDD. And we had to do this feature, and it was kind of confusing for me to understand. Like, wait ... test first? No, there's no time for that. We have to implement these things and push them out. Before that shop, I used to work at a finance shop, and time is of the essence. You have to fix, do things now. Now. Because time is money and you have to get things done. So testing was brand new to me, and we had to implement this feature. I can't remember exactly what we were implementing, but if you could imagine, imagine a function that returns 42. So the person that I'm pairing with says, okay, we're gonna write a test, and in this test it's gonna return the number 42, as we expect, when you give the number 1 or whatever. So I'm writing the test, trying to figure out how to write this test, and I manage to get this test done, and my pair then ... I don't know if it was a joke, or, to me it was a joke, but in the implementation he literally returned the number 42 and made the test pass. Why are we doing that? Like, why are you playing games? I wrote this test. I put number 1 and you just have it return 42. What are you talking about? But then as we were building out the tests for the requirements, as we were passing it in, like, if you were passing 2 then it has to return another number. And that's where it got a little more complex, and things happened the more tests we wrote according to the requirements. That's when the function actually became exactly what we expected it to be. So I just thought it's funny. I call it, like, wise guy driven development.
'Cause it's like, I wrote this test and you gotta find a way to cleverly make the test pass. Like, oh, boom, 42. What do you got for me? That was, like, my ... so hard to comprehend that. But then as we were writing the tests and we were getting all the requirements in tests, that's when I realized this particular function is fully tested and we have full confidence that it's going to do exactly what we expect it to do.
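The "wise guy" move Michael describes is often called "fake it till you make it," and the follow-up test that breaks the hard-coded return is "triangulation." A hypothetical Ruby sketch; the episode never says what the real feature was, so the rule below is invented:

```ruby
# Test 1 expects answer(1) == 42. The cheapest passing implementation
# is the wise-guy one:
#
#   def answer(n)
#     42
#   end
#
# Test 2 triangulates by expecting answer(2) == 84, which forces the
# hard-coded constant to become a real rule:
def answer(n)
  n * 42
end
```

Each new requirement-driven test narrows what the implementation can get away with, until only the behavior you actually specified survives.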
David Anderson: That's kind of getting thrown into the TDD deep end. [crosstalk 00:21:05]
Speaker 5: Wise guy driven development.
William J: I remember the first time that I paired with somebody who was more junior than me. And one of the things that was beneficial about it ... you know, I think ... this is a bit of a tangent, but I think that people underestimate the value of pairing with a junior. What's really nice about doing TDD with a junior is that it makes me much more concerned about practices. Like, I'm way stricter with myself about actually deleting [inaudible 00:21:37] spike it out, and making sure that every test that I write starts red and then goes green, and making sure that I handle all the edge cases and I'm super wise guy about driving out every scenario. And the reason is because I have somebody who I feel a responsibility for helping to learn and grow and develop as a developer. And so, I think particularly for TDD, that's been a really positive experience.
Emmanuel Genard: That's really interesting. I learned TDD on my own. It's so strange, because I kind of got into TDD probably because, when I first got interested in programming, I used to just scour the internet and find stuff to read and listen to, and talks. And I don't remember where or when I first heard about TDD. It was probably some post somewhere. Maybe a video. The image that comes to mind right now is probably a video by Sandi Metz, but she doesn't really talk about TDD that much. I don't know why that's the first one that pops into my mind. The idea just made too much sense to me. That you need to know what you're doing before you do it. That's how I see it. Or, you need a way that is somewhat outside of yourself to verify that you did what you think you did. And I like to think of debugging as finding the truth between what you think you did and what you actually did, right? So it's not a bug. It's really a mistake that I made in either thinking or typing. Mostly typing, right? So TDD helps kind of verify my assumptions, and that sort of idea is something that appeals to me, to verify them as much as I can. And TDD allows me to do that. Also to figure out what your assumptions are, because writing the test reveals what you think this thing ought to do. It reveals what you think you want to do. And when you get down to the specific thing, that this thing is supposed to return, that when I call this function it's supposed to be true, you kind of get down to ... for me anyway ... like, I used to write a lot when I was younger. I used to write plays and short scenes and things like that. And so writing things down is how I think. And so TDD kind of helps me think.
William J: I think that when we as developers are trying to solve a problem, we have to hold a whole lot of information in our heads. You can imagine humans as having RAM, Random Access Memory, where you have everything loaded in memory and you can get to it really quickly, but you have a very limited capacity for that. TDD is a great way of preventing things from accumulating in RAM. As soon as you come up with a requirement for the app, you write down the requirement. Now it's not in your head. All you have to do is run the specs and the requirement is tested. And then all you have to do is go and implement that one feature, that one return value, that one portion of whatever it is you're testing. And that's out of your head and you're back to full RAM. Whereas when you're trying to spike something out, you have to keep all of the functionality and all of the specifications and all of the behavior that you've coded thus far in your head. And so I think that's part of why I really enjoy TDD so much. It's because it keeps my head empty so that I can solve hard problems with 100% of my RAM.
David Anderson: So for me, when I was learning TDD, it was similar to Emmanuel, where I was mainly learning by myself, but I had a community of people that I was working with at the time. And we were learning different programming concepts. I kind of put a line out there, like, "Hey, does anyone know TDD? Who can teach me?" And I basically got back a lot of responses, like, I really want to learn more about TDD, and a lot of people were interested. So we kind of gathered together as a group and laid out our assumptions about what testing actually meant and why it was important. And we went through some exercises. And as soon as I realized a lot of the things we were talking about ... the ability and freedom to think about things, and refactor without fear ... it was like a superpower. I felt so empowered by this. This is amazing. I think a lot of our initial assumptions about testing and what it was there for were not entirely true. Because the real reason why it's there is what we're talking about now: facilitating your means of thinking. Having the verification is just a bonus on top of that, I think. The main thing is just being able to work efficiently and think about the problem in a different way.
William J: I'm kind of jealous that everybody remembers the moment they discovered TDD. I feel kind of inadequate. I think I first heard about it through some online class. And then I got exposed to it slowly over time and I experimented with it over time. I don't really remember that turning point. I think it just probably took a long time for me to -
David Anderson: Just like a leaky abstraction, slowly leaking into your life.
Speaker 5: Just like that. I like to think that you had it easy, because you just knew that there was a world where it was all about TDD. Where I had worked before that, there was no testing allowed, pretty much. You implement something, and then you trip up, like, the regression test that takes 3 hours. They use real data and process all this real data, and everything just starts failing. Like, what was it about the changes that I've made that broke all these tests? So you have to go and figure it out. Then run it again. And it's just so time-consuming. With unit tests, you can just run them, 15 seconds at most, and you know whether you broke something horribly or not. Which is great.
David Anderson: That's an interesting distinction that I don't think we've really talked about: the difference between a unit test and an end-to-end test, like the regression test. I think both have their place in TDD, right? I guess mostly you think about unit tests when you're thinking about TDD, because that's the quick-response thing.
William J: The most fun part of TDD is when you get down to the unit tests. I always start at the feature level, and that's always the hardest, although it probably adds the most value in terms of thinking through your problem domain, particularly if you're doing TDD. Because you have to go through and write down in English exactly what it is that this feature is supposed to do and why. And that I find really helpful, particularly when you're trying to come up with the vocabulary to describe it in the first place. I think using the naming convention from the actual domain that your app works in is a really powerful thing. And it's really tempting when you're in the weeds at the unit level to just say, "Well, you know, it's ... info".
David Anderson: Get data. Update data.
William J: People want to name everything data. [crosstalk 00:29:10]
David Anderson: You always want to get it too right? Just get that. [crosstalk 00:29:13] Set the data. Get the data.
Speaker 5: Close up shop, guys. [crosstalk 00:29:21]
William J: Anyway, yeah. So if you take the time to write everything ahead of time, it forces you to identify exactly what that data is. Oh, I see. This data is actually an order, or this data actually represents a product.
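William's point about domain vocabulary is easiest to see side by side. Both versions below behave identically; the order/product domain and every name here are invented for illustration, not from the episode:

```ruby
# Generic naming: the reader learns nothing about the domain.
def get_data(items)
  items.sum { |i| i[:p] * i[:q] }
end

# Domain naming: the function and its tests now read like a spec
# for an order total.
def order_total(line_items)
  line_items.sum { |item| item[:unit_price] * item[:quantity] }
end

cart = [
  { unit_price: 500, quantity: 2 }, # two items at $5.00
  { unit_price: 150, quantity: 1 }, # one item at $1.50
]
order_total(cart) # total in cents
```

Writing the test first is what forces the question "what is this data, really?" before a name like `get_data` can slip in.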
David Anderson: Mm-hmm (affirmative)
Michael Nunez: Any different programming languages you prefer doing TDD in?
Speaker 5: I personally like doing it in Ruby. Ruby is very clear. I find myself always going to the debugger for React. We use the Jasmine and Karma frameworks at the client right now, and there's always something tripping me up. Like, oh man, oops, gotta write this test. Go to the debugger and figure out what HTML element or function to call to get exactly what it is that I want to test. But Ruby is just like, oh yeah, this function should do this. And then you call that function and then, boom, it just works. It's great.
William J: I think part of it is that the Ruby community is really into TDD, and really into testing in general. And so there are a lot of great libraries out there, and the libraries are strong. And there's usually a lot of agreement about which library you should use. So, like, in the JavaScript community there are a zillion different matcher libraries. In Ruby, it's like everybody just uses RSpec. And RSpec just has all of the matchers.
Speaker 5: Yup.
David Anderson: There's a couple of different ones with Ruby though, right?
Speaker 5: There's Ruby's Minitest. I think Minitest is another one. I don't know how I know that, 'cause I have only worked at places that use RSpec. I know Minitest is one of the ones ... I think it comes with Ruby when you install it using RVM or something like that. But RSpec, everyone uses RSpec. That's just the thing to use.
William J: Yeah. And it's also really full-featured. In JavaScript, you need a test runner, and then you need the actual testing library, and then you need the matchers to go on top of it. And they're all different libraries maintained by different people, and they don't always play nicely together. Whereas with Ruby it's usually like, here's this one library that just does all of the things.
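To make the "one library does all of the things" point concrete: Minitest, which the panel notes comes bundled with Ruby itself, gives you the runner, assertions, and mocks from a single require. A minimal sketch, with a `Stack` class invented purely for the example:

```ruby
require "minitest/autorun" # runner + assertions + mocks, one require

# A tiny class invented for the example.
class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items << item
  end

  def pop
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

# No separate runner or matcher library needed -- it's all in Minitest.
class StackTest < Minitest::Test
  def test_pop_returns_last_pushed_item
    stack = Stack.new
    stack.push(1)
    stack.push(2)
    assert_equal 2, stack.pop
  end

  def test_starts_empty
    assert_predicate Stack.new, :empty?
  end
end
```

Running the file with plain `ruby stack_test.rb` executes the suite; no extra configuration is required.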
David Anderson: Right. Because people considered it from the beginning as something important. Whereas JavaScript is front end, and I guess that's another thing too, right? With Ruby you're often writing back-end code, and you can be a little more functional about it, maybe, than when you're writing front-end code, where it gets a little messier 'cause there's HTML elements and user interaction and things like that. Which React makes a lot easier to reason about when you're writing tests, right?
Speaker 5: I mean, they have these little tricks to ensure, like, oh, you called this onClick function, then you expect this thing. But I'm always having to brush up ... what is it that I have to check? It's not at the component level; it'll be at the HTML level. I have to check the text content of a particular component. And it's like I have to go through it millions of times until I get it in my head. That's what you have to do. But it's fun. Like, oh, what is it again? You search, and then it's great. I mean, it's not like it's difficult, 'cause I can always find the answer in the debugger, which is great. But with Ruby I know exactly what to do, because I've been testing longer in Ruby than I have in React.
William J: There's also a bunch of front-end stuff that is just totally untestable. Like, test that the nav bar is actually visible. There are like 30,000 ways that that nav bar might not be visible. Good luck.
David Anderson: Welcome to front-end testing.
Emmanuel Genard: You know, there's James Shore, the person. He has written a library, some way to actually test CSS. He's been working on it ... you can test-drive CSS. I'm just throwing it out there randomly. I've never seen it; I would never use it; I've just heard about it. He's been writing it for almost a year now.
Speaker 5: You guys can't see it, but I'm pointing at my eyeballs. That is what I use to test CSS: I look at the screen and the colors work. That is really difficult. I mean, in React you can definitely check the class name of a particular component, if necessary. But CSS testing, man, I'm looking forward to that. That sounds really interesting.
William J: Yeah, what is the name of this thing? I'm definitely looking this up.
Emmanuel Genard: [inaudible 00:33:49] like Don Quixote, but -
Speaker 5: Nice.
William J: That's a highly appropriate name.
Emmanuel Genard: Yes.
Michael Nunez: We're about to close up the conversation about TDD. I just wanted to ask everyone here if you guys have ever seen the Martin Fowler talk, "Is TDD Dead?", with Kent Beck and DHH. It's a really interesting conversation where I believe ... I haven't seen it in a really long time, but I believe that DHH does not believe in testing.
Emmanuel Genard: I don't think he doesn't believe in testing. He doesn't believe in TDD anymore, because he used to be ... I've heard interviews where he says he drank the Kool-Aid on a lot of the XP stuff when it first came out. And when he was writing Rails, his role models were people like Martin Fowler and Kent Beck. I don't remember what the argument was that he made for TDD being dead. I think it might be that writing tests first is dead, or something like that.
Michael Nunez: I think that was what it was actually.
William J: I think he's trolling you man. I really think -
Michael Nunez: I don't know. The video is out, and I'm sure a lot of listeners have probably seen it, and if you haven't, you should check it out. It's great. He could very well be trolling all of us right now. I can't live without TDD, so it's very much alive for me.
William J: That video has been on my Watch Later list on YouTube for about three years.
Michael Nunez: Put it on your watch it now.
William J: I remember it was 2012 or 2013. He came out with a blog post called "TDD is dead. Long live testing." And it blew up, and everyone was like, oh my god, he thinks TDD is dead. But it's actually a reference to "the king is dead, long live the king." He actually does believe that, at least at the time ... I don't know, I haven't talked to him personally. But at least at the time, based off of that blog post, he was trying to be controversial, which is a thing he is good at doing.
Michael Nunez: Go check it out.
Emmanuel Genard: Also, that set phrase, "the king is dead, long live the king," comes from when a monarch dies and the next one gets crowned. So it's about an era ending and a new one rising.
William J: But there's still a king.
Emmanuel Genard: There's still a king. So there's still TDD. It's just that maybe a way of doing TDD, or of thinking about it, has died.
David Anderson: Right. Yeah. I do remember seeing a blog post that was highlighting a study. I can't remember the specific metrics they were using, but they were backing their claim with code metrics. Like, see, we still got the work done, even though we didn't do TDD. But then you look at the comments section of that article, and people have opinions. They're like, but how beautiful is the code? You didn't use TDD. How maintainable is it? All these nonfunctional requirements that are really hard to measure.
Emmanuel Genard: I think I saw that article too, and the comments section. These studies usually use college students writing fairly small applications, and they have until the end of the semester, or whatever time they have, to do their testing. So it's not a realistic representation of what happens when you do this for a living. I'm working on an application that's fairly ... it's not tiny, but it's not huge. I'm sure most of you are working on stuff that's bigger. But it still has a hundred different models, right? And we have almost 5,000 tests on it. Now, if those tests didn't exist, trying to make a change in that application would be just impossible. Or it would just take forever. Not because the implementation of the thing would take a long time, but because of fixing everything that broke afterwards. And finding code you wrote three months ago breaking stuff today, right? That's just -
David Anderson: But that's the difference between writing the test after versus writing it before, and does that drive you to have better-organized code, or modules that aren't 3,000 lines long, or any other more squishy metric.
Speaker 5: That's just stress I don't want to deal with. I'd rather just have the tests stress for me. And if something fails when I implement something, I might as well just take care of it.
Emmanuel Genard: To go back to testing before versus testing after: if you think about what happens when you test before, you're able to be more concrete, as we talked about earlier, about the ideas that you're actually trying to test for. When you test after, you're confirming something you already wrote, right? That's a very different kind of approach, and the kind of edge cases you run into are different ... I think when you test after, the edge cases come to you after, right? There have been a couple of times in the project I'm on where testing after has led to unforeseen bugs that test-first might have caught ... not guaranteed to have caught, just more likely to have caught.
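A small sketch of the test-first dynamic described here, using Minitest (bundled with Ruby); the `average` function and its empty-list edge case are hypothetical, invented for illustration:

```ruby
require "minitest/autorun"

# Writing the test first forces the question up front: what should
# happen with an empty list? Test-after tends to confirm only the
# happy path of code that already exists.
class AverageTest < Minitest::Test
  def test_average_of_numbers
    assert_equal 2, average([1, 2, 3])
  end

  def test_average_of_empty_list_is_zero
    # This edge case was decided before any implementation was written.
    assert_equal 0, average([])
  end
end

# Implementation written *after* the tests above, shaped by them.
def average(numbers)
  return 0 if numbers.empty?
  numbers.sum / numbers.length.to_f
end
```

The point is not the arithmetic but the ordering: the empty-list behavior is a design decision made while writing the test, rather than a bug discovered later.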
Michael Nunez: Awesome. That was a great conversation, everyone. We hope you guys check out "Is TDD Dead?" and enjoy it. It's a great interview, if I recall correctly. I might have to rewatch it. Do we have any picks you want to talk about before we end the podcast?
David Anderson: So, I have one thing. Recently Google released an API called "Actions on Google," which is an API for the Google Assistant, which is used by Google Home and also the new Pixel phones and all that fun stuff. So I was looking at these tools, and it's really interesting because of the things that you use to build these interactions, these conversations. Obviously those interactions are machine learning based. It's figuring out what word you said, like speech to text, and then it's figuring out what response to give. But even the tools that you use to build these interactions are using ML as well. You give it examples of parameters that you want, like different kinds of foods, and then you give it examples of different kinds of questions that you want to ask it, and it automatically figures out which things are the parameters and what you want in there. It's just stacks of ML. And it's pretty exciting, because these nitty-gritty problems are solved now, so people can build on top of them and make something really cool.
Speaker 5: Nice. I'm looking forward to those apps. One thing I have, I think I mentioned at the beginning of the podcast, is pairing with someone who is unfamiliar with React. And it's also my pick, because I'm also looking forward to working in Scala. There's something very different about pairing with someone when you and the other person both more or less know how to get the feature done, versus when you almost depend on someone, so that person ends up teaching you how to do it yourself. It feels very empowering that we're doing that on the team that I'm on. So I'm really looking forward to learning Scala and producing awesome code in both Scala and React. And being able to teach someone. That is awesome.
Michael Nunez: Ladies and gentlemen. Thank you for listening to the Rabbit Hole. I'd like to thank the panelists here today and we'll see you next time.
Links and Resources:
Our seasoned cross-functional agilists work with you to develop technology, ship product and deliver value. We leverage our diverse expertise across industries and throughout the organizational growth cycle to your benefit. We bring best practices, emergent practices and creative solutions to your problems.