Click on the chapters listed below to get details on what is covered.
Nota bene: We don't mention it in every chapter description, but each chapter is loaded with examples and exercises.
You code to learn, and learn to code, with this book!
This chapter provides a brief introduction to the lambda calculus. This chapter was added some time after our initial May 2015 release in response to reader testing that showed later concepts were still a bit confusing. In addition to this introduction explaining the basic semantics, we list optional follow-up reading in order of difficulty for those that want to go further. Do not skip this! It'll make Haskell concepts, such as currying, easier to understand later on.
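As a small taste of how lambda calculus ideas show up directly in Haskell, here is a sketch (the names `identity` and `konst` are ours, chosen to avoid clashing with the Prelude's `id` and `const`):

```haskell
-- Anonymous functions in Haskell are lambda terms; applying one
-- substitutes the argument for the bound variable (beta reduction).
identity :: a -> a
identity = \x -> x            -- (\x -> x) 3 reduces to 3

-- A lambda returning a lambda: takes two arguments, returns the first.
konst :: a -> b -> a
konst = \x -> \y -> x         -- konst 'a' 'b' reduces to 'a'
```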
Your first step in learning Haskell. We start with getting comfortable in the interactive environment so that you can complete the exercises as you work through the book. In this chapter, we will:
In this chapter we take a look at basic expressions that process a type of data called String. In this chapter we will:
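For a preview, a couple of illustrative String expressions (the names here are ours):

```haskell
-- String is a list of Char; (++) concatenates two lists.
greeting :: String
greeting = "hello" ++ " " ++ "world"

shout :: String -> String
shout s = s ++ "!"            -- shout greeting is "hello world!"
```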
We've already looked at expressions involving numbers, characters, and lists of characters, also called strings. This chapter introduces the ways Haskell categorizes and structures data with types. We start here with some of the standard, built-in types, such as Integer and Bool. In this chapter we will:
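A small sketch of what type annotations with these built-in types look like (example names are ours):

```haskell
-- A type signature names the type of the expression below it.
answer :: Integer
answer = 42

isEven :: Integer -> Bool     -- a function from Integer to Bool
isEven n = n `mod` 2 == 0

pair :: (Integer, Bool)       -- a tuple combines values of two types
pair = (answer, isEven answer)
```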
Haskell has a robust and expressive type system. Types play an important role in the readability, safety, and maintainability of Haskell code. In the last chapter, we looked at some built-in datatypes, such as Bool and tuples. In this chapter, we take a deeper look at the type system. Here we will:
We've mentioned typeclasses in previous chapters, but a solid understanding of them is crucial to understanding Haskell. Typeclasses abstract patterns and provide methods for working with typed data. In the Types chapter, we looked at the way typeclasses interact with type variables and numeric types. This chapter explains some important predefined typeclasses and goes into more detail about how typeclasses work in general. In this chapter we:
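A minimal sketch of the mechanism, using an invented class and datatype (`Describable` and `Light` are our hypothetical names, not from the book):

```haskell
-- A typeclass declares operations; each instance supplies
-- implementations for one particular type.
class Describable a where
  describe :: a -> String

data Light = Red | Green

instance Describable Light where
  describe Red   = "stop"
  describe Green = "go"
```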
You might be asking yourself what this chapter is all about: haven't we been talking about functions all along? We have, but now we expand the kinds of functions you can understand and create. This is where you'll learn pattern matching and case expressions in more detail. You'll learn that Haskell functions are first-class entities that can be values in expressions or data structures. You'll learn that they can be passed as arguments to a function. And you'll become comfortable with how functions can make use of syntactic patterns to match on and unpack data constructors. You'll also stretch your legs by writing functions that embody decision procedures or logic. In one sentence, this chapter shows that functions are at the heart of everything you do in Haskell.
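A brief sketch of the three ideas mentioned above (the function names are ours):

```haskell
-- Pattern matching unpacks arguments right in the definition.
isZero :: Integer -> Bool
isZero 0 = True
isZero _ = False

-- case expressions match inside a function body.
sign :: Integer -> String
sign n = case compare n 0 of
  LT -> "negative"
  EQ -> "zero"
  GT -> "positive"

-- Functions are first-class: twice takes a function as an argument.
twice :: (a -> a) -> a -> a
twice f = f . f
```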
Recursion is defining a function in terms of itself via self-referential expressions. It means that the function will continue to call itself and repeat its behavior until some condition is met to return a result. It's an important concept in Haskell that gives us a means of expressing indefinite or incremental computation. In this chapter we:
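The classic illustration of a self-referential definition, as a sketch:

```haskell
-- The function calls itself with a smaller argument until it
-- reaches the base case (0), which stops the recursion.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)
```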
Lists do double duty in Haskell. Lists serve as a way to refer to and process a collection or plurality of values. But they may also be an infinite series of values, usually generated by a function, which allows them to act as a stream datatype. In many respects, lists are as much a control structure as they are a data structure. In this chapter, we will:
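A tiny sketch of lists playing both roles, finite collection and infinite stream:

```haskell
-- An infinite series of values; laziness means only the part
-- we demand is ever generated.
evens :: [Integer]
evens = [0, 2 ..]

-- take forces just the first five elements of the infinite list.
firstFive :: [Integer]
firstFive = take 5 evens
```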
Folding is a concept that extends in usefulness and importance beyond lists, but lists are often how folds are introduced. Folds arise from a more general concept that we discuss in the chapter. This chapter is a thorough look at the topic of folding lists in Haskell. In this chapter we will:
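As a preview, one way to read a right fold (the name `sumR` is ours):

```haskell
-- foldr replaces each (:) in the list with the given function
-- and the final [] with the starting value:
--   1 : (2 : (3 : []))  becomes  1 + (2 + (3 + 0))
sumR :: [Integer] -> Integer
sumR = foldr (+) 0
```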
We have spent a lot of time exploring datatypes, so you may think we've covered everything already. This chapter explains how to construct your own datatypes. But to understand that, first we need to explain the differences among datatypes more fully and understand what it means when we say datatypes are algebraic.
Haskell offers sum types, product types, product types with record syntax, type aliases, and a special kind of datatype called a newtype that offers a different set of options and constraints from either type synonyms or data declarations. We explain each of these in this chapter and show you how to exploit them for maximum utility and type safety. This chapter will:
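A compact sketch of the kinds of declarations listed above (`Shape` and `Meters` are our invented examples):

```haskell
-- A sum type: a Shape is ONE of these constructors.
-- Rect is itself a product: it carries TWO fields.
data Shape
  = Circle Double
  | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- A newtype: a zero-cost wrapper that adds type safety, so a
-- plain Double can't be passed where Meters is expected.
newtype Meters = Meters Double
```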
Sometimes it's not convenient or possible for every value in a datatype to make sense for your programs. When that happens in Haskell, we use explicit datatypes to signal when our functions receive a combination of inputs that doesn't make sense. Later, we'll see how to defend against those adverse inputs at the time we construct our datatypes, but the Maybe and Either datatypes we will demonstrate here are commonly used in Haskell programs. This chapter includes:
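A small sketch of both datatypes in action (function names are ours):

```haskell
-- Maybe signals failure without saying why.
safeDiv :: Integer -> Integer -> Maybe Integer
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Either carries a reason for the failure in its Left value.
checkAge :: Integer -> Either String Integer
checkAge n
  | n < 0     = Left "negative age"
  | otherwise = Right n
```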
In this chapter, we will be building a small, interactive hangman-style game.
This chapter's primary focus is not so much on code but on how to set up a project in Haskell, use the package manager known as Cabal, and work with (Haskell) modules. You will implement part of the hangman game yourself, but much of the code is already written for you, and we've tried to explain the structure as well as we can at this point in the book. Some of it will remain opaque until we've covered at least monads and IO. But if you finish the chapter feeling like you now know how to set up a project environment and get things running, then this chapter will have accomplished its goal. We expect to add instructions on using Stack to this chapter in the near future. In this chapter, we'll cover:
This chapter, like the one before it, is focused on practical matters rather than Haskell code per se. We will cover two testing libraries here, representing two different but complementary testing methodologies, and how and when to use them. At the end of the chapter, there are a number of exercises that ask you to write your own tests for practice.
Testing is a core part of the working programmer's toolkit, and Haskell is no exception. Well-specified types can enable programmers to avoid many obvious and tedious tests that might otherwise be necessary to maintain in untyped programming languages, but there's still a lot of value to be obtained in executable specifications. To that end, this chapter covers:
One of the finer points of the Haskell community has been its propensity for recognizing and expressing abstract patterns in code. Over the next few chapters, we're going to be looking at some of the patterns represented via typeclasses. We begin with one of the simpler patterns, one that doesn't require understanding higher-kinded types. This chapter includes:
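A small preview of the pattern in question: a binary operation for combining values, plus an identity element (the names `combined` and `total` are ours):

```haskell
import Data.Monoid (Sum (..))

-- (<>) combines; mempty is the identity, so it changes nothing.
combined :: String
combined = "foo" <> mempty <> "bar"

-- foldMap maps each element into a monoid, then combines them all.
total :: Integer
total = getSum (foldMap Sum [1, 2, 3])
```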
In the last chapter on Monoid, we saw what it means to talk about an algebra and turn that into a typeclass. This chapter and the two that follow, on Applicative and Monad, will be on a similar topic. Each of these algebras is more powerful than the last, but the general concept here will remain the same: we abstract out a common pattern, make certain it follows some laws, give it an awesome name, and wonder how we ever lived without it. Monads sort of steal the Haskell spotlight, but you can do more with Functor and Applicative than many people realize. Also, understanding Functor and Applicative is important to a deep understanding of Monad.
This chapter is all about Functor, and Functor is all about mapping functions over structure. We saw fmap way back in Lists and noted that it worked just the same as map, but we also said back then that the difference is that you can use fmap with structures that aren't lists. Now we will see what that means. This chapter includes:
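A taste of `fmap` over structures that aren't lists (`incMaybe` is our name):

```haskell
-- fmap applies a function inside the structure, leaving the
-- structure itself untouched.
incMaybe :: Maybe Int -> Maybe Int
incMaybe = fmap (+ 1)

-- The same fmap works over Either, mapping only the Right value.
incRight :: Either String Int -> Either String Int
incRight = fmap (+ 1)
```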
In the previous chapters, we've seen two common algebras that are used as typeclasses in Haskell. We come now to Applicative. Applicative is a CIA CENSORED. The Applicative typeclass allows for function application lifted over structure (like Functor). But with Applicative the function we're applying is also embedded in the same structure. In this chapter, we:
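A brief sketch of what "the function is also embedded in the structure" means (the names are ours):

```haskell
-- The (+) is applied inside Maybe; if either side were Nothing,
-- the whole result would be Nothing.
maybeSum :: Maybe Int
maybeSum = (+) <$> Just 2 <*> Just 3

-- With lists, the functions themselves live inside the structure:
-- every function is applied to every argument.
apList :: [Int]
apList = [(+ 1), (* 10)] <*> [1, 2]
```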
Finally we come to one of the most talked about structures in Haskell: the monad. Monads are not, strictly speaking, necessary to Haskell. Monads are powerful and fun, but they do not define Haskell. Rather, monads are defined in terms of Haskell. Monads are applicative functors, but they have something special about them that makes them different from and more powerful than Applicative or Functor alone. In this chapter, we:
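A glimpse of the extra power: with `>>=`, each step can depend on the result of the previous one, and failure short-circuits (function names are ours):

```haskell
-- Succeeds only for even numbers.
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- (>>=) feeds the result of the first halve into the second;
-- a Nothing anywhere stops the whole computation.
quarter :: Int -> Maybe Int
quarter n = halve n >>= halve
```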
We thought you'd like to see Monoid, Functor, Applicative, and Monad in the wild as it were. Consider this a breezy survey of how Haskellers write code when they think no one is looking and a pleasant break from your regularly scheduled exercises.
This typeclass has been appearing in type signatures at least since Chapter 3, but for your purposes in those early chapters, we said you could think of a Foldable thing as a list. As you saw in the chapter on folds, lists are certainly foldable data structures. But lists are not the only foldable data structures, so this chapter will expand on the idea of catamorphisms and generalize it to many datatypes. This chapter covers:
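A quick sketch of folding non-list structures (the bindings are our own; `toList` is from base's `Data.Foldable`):

```haskell
import Data.Foldable (toList)

-- sum works on any Foldable; a Just acts like a one-element container.
maybeTotal :: Integer
maybeTotal = sum (Just 3)

-- toList flattens any Foldable into a list; for Either, a Right
-- holds one element and a Left holds none.
eitherList :: [Integer]
eitherList = toList (Right 5 :: Either String Integer)

maybeCount :: Int
maybeCount = length (Nothing :: Maybe Int)
```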
Traversable allows you to transform elements inside the structure like a Functor, producing Applicative effects along the way, and lift those elements of Applicative structure outside of the Traversable structure. Traversable is commonly described as a way to traverse a data structure, mapping a function inside a structure while accumulating the applicative contexts along the way. This is easiest to see, perhaps, through liberal demonstration of examples. In this chapter, we will:
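In the spirit of liberal demonstration, one small example (the name `allPositive` is ours):

```haskell
-- traverse maps an effectful function over a structure and flips
-- it inside-out: [Maybe Integer] becomes Maybe [Integer].
-- One failing element makes the whole result Nothing.
allPositive :: [Integer] -> Maybe [Integer]
allPositive = traverse check
  where
    check n = if n > 0 then Just n else Nothing
```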
When writing applications, programmers often need to pass around some information that may be needed intermittently or universally throughout an entire application. We don't want to thread this information through as explicit arguments everywhere, since that can make the code harder to read and harder to maintain. To address this, we use the Reader Monad. In this chapter, we:
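A sketch of the idea using the fact that the function type `((->) r)` is itself a Reader: every step in the `do` block reads from the same shared environment. (`Config`, `port`, and `verbose` are hypothetical names of ours.)

```haskell
-- A hypothetical environment we want available everywhere.
data Config = Config { port :: Int, verbose :: Bool }

-- No Config argument is named below: each step implicitly
-- receives the same Config, courtesy of the ((->) r) monad.
summary :: Config -> String
summary = do
  p <- port
  v <- verbose
  return ("port " ++ show p ++ (if v then ", verbose" else ""))
```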
What if I need state? In Haskell we have many means of representing, accessing, and modifying state. We can think of state as data that exists in addition to the inputs and outputs of our functions, data that can potentially change after each function is evaluated. In this chapter, we will:
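A minimal sketch of the idea: state is a value threaded through a chain of functions. This hand-rolled `State` is our own illustration; the real one lives in the `transformers`/`mtl` libraries as `Control.Monad.State`.

```haskell
-- A stateful computation is a function from a state to a
-- result paired with the (possibly changed) next state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (State s) where
  pure a = State (\s -> (a, s))
  State f <*> State g = State (\s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2))

instance Monad (State s) where
  State g >>= f = State (\s -> let (a, s') = g s in runState (f a) s')

get :: State s s
get = State (\s -> (s, s))

put :: s -> State s ()
put s = State (\_ -> ((), s))

-- Return the current counter and store the incremented one.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n
```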
Parsing originally referred to analyzing a sentence of a natural language and identifying the constituent parts of speech and their relationships. Parsing in computer science is related to this idea: it is a means of breaking up a chunk of data and finding and processing the parts of it you care about. It allows you to accept serialized input in the form of a sequence of characters (textual data) or bytes (raw binary data) and turn that into a value of a structured datatype.
Often when we are parsing things, the structured datatype that results will look something like a tree. In Haskell, we sometimes end up having a tree just because recursive types are so easy to express in Haskell. In this chapter, we:
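To make "breaking up a chunk of data" concrete, here is a toy parser type of our own devising; real Haskell parsing libraries (such as parsec or trifecta) are built on the same shape:

```haskell
-- A parser consumes a prefix of the input and, on success,
-- returns a result plus the unconsumed remainder.
type Parser a = String -> Maybe (a, String)

char :: Char -> Parser Char
char c (x : xs) | x == c = Just (c, xs)
char _ _                 = Nothing

digits :: Parser Int
digits s = case span (`elem` "0123456789") s of
  ("", _)    -> Nothing
  (ds, rest) -> Just (read ds, rest)
```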
This chapter and the next are about monad transformers, both the principles behind them and the practicalities of using them. For many programmers, monad transformers are indistinguishable from magick, so we want to approach them from both angles and demonstrate that they are both comprehensible via their types and quite practical in normal programming.
There are many times in "real code" when composing monads is desirable. Different monads allow us to work with different effects. Composing monads allows you to build up computations with multiple effects. By stacking, for example, a Maybe monad with an IO, you can be performing IO actions while also building up computations that have a possibility of failure, handled by the Maybe monad. In this chapter, we will:
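A sketch of the mechanics with a hand-rolled `MaybeT` (the real one is in the `transformers` package). For a pure, testable demonstration we stack Maybe over the list monad rather than IO, but the shape is the same:

```haskell
-- MaybeT wraps a base monad m whose results carry a Maybe inside.
newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Monad m => Functor (MaybeT m) where
  fmap f (MaybeT ma) = MaybeT (fmap (fmap f) ma)

instance Monad m => Applicative (MaybeT m) where
  pure = MaybeT . pure . Just
  MaybeT mf <*> MaybeT ma = MaybeT $ do
    f <- mf
    case f of
      Nothing -> pure Nothing
      Just g  -> fmap (fmap g) ma

instance Monad m => Monad (MaybeT m) where
  MaybeT ma >>= f = MaybeT $ do
    a <- ma
    case a of
      Nothing -> pure Nothing
      Just x  -> runMaybeT (f x)

-- Both effects at once: the list's nondeterminism and
-- Maybe's possibility of failure.
demo :: [Maybe Int]
demo = runMaybeT $ do
  x <- MaybeT [Just 1, Nothing, Just 3]
  pure (x * 10)
```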
The last chapter demonstrated why we need monad transformers and the basic type manipulation that's going on to make that bit o' magick happen. Monad transformers are important in a lot of everyday Haskell code, though, so we want to dive deeper and make sure we have a good understanding of how to use them in practice. Even after you know how to write all the transformer instances, managing stacks of transformers in an application can be tricky. The goal of this chapter is to get comfortable with it in a way that a basic introduction alone could not have accomplished. In this chapter, we:
This chapter concerns the ways in which Haskell programs are evaluated. We aim to give you enough information about Haskell's evaluation strategies that you'll be able to reason confidently about the reduction process of various expressions and programs and introduce stricter evaluation where that is wanted. Specifically, we will:
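Two of the basic tools for introducing strictness, sketched with names of our own (`foldl'` and `seq` are standard, from base):

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at each step, so summing a long
-- list doesn't build up a huge chain of unevaluated thunks.
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0

-- seq evaluates its first argument to weak head normal form
-- before returning its second.
forceFirst :: Integer -> Integer -> Integer
forceFirst a b = a `seq` b
```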
Data structures are kind of important. Insofar as computers are fast, they aren't getting much faster; the CPU isn't, at least. This is usually a lead-in for a parallelism/concurrency sales pitch. But this isn't that book.
This chapter is here to help you choose the optimal data structures for your programs. We can't prescribe one or the other of similar data structures because how effective they are will depend a lot on what you're trying to do. So, our first step will be to give you tools to measure for yourself how different structures will perform in your context. This includes learning how to profile time and memory as well as benchmarking your code. We'll also cover some of the mistakes that can cause your memory usage and evaluation time to explode. This chapter will:
Let's talk about IO.
We've used the IO type at various times throughout the book, with cursory explanations of what it's all about. You no doubt know that we use this type in Haskell as a means of keeping our chocolate separate from our peanut butter. Perhaps you're wondering how it all works, what's underneath that opaque type. To many people, IO seems mysterious.
Most explanations of the IO type in Haskell don't help much, either. They may seem designed to mess with the reader's head rather than convey anything useful. Don't look now, but there's somebody writing an explanation of IO right now that uses van Laarhoven Free Monads and costate comonad coalgebras to explain something much simpler than either topic.
We're not going to do any of that here, but we are going to try to demystify IO a bit. The important thing about IO is that it's a special kind of datatype that CIA CENSORED. You got a taste of this with our short introduction to the ST type in the previous chapter. In this chapter, we will:
Let's face it: in the execution of a program, a lot of things can happen, not all of them expected or desired. In those unhappy times when things have not gone as we wanted them to, we will throw or raise an exception. The term exception refers to the condition that has interrupted the expected execution of the program. Encountering an exception causes an error, or exception, message to appear, informing you that due to some condition you weren't prepared for, the execution of the program has halted in an unfortunate way.
Letting exceptions arise as they will — and the program halt willy-nilly — is suboptimal. Exception handling is a way of dealing with errors and giving the program some alternate means of execution or termination should one arise. This chapter is going to cover both exceptions and what they look like as well as various means of handling them.
In this chapter, we will:
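A small sketch of both sides of the topic. `try`, `IOException`, and `ArithException` come from base's `Control.Exception`; the two function names are our own illustrations:

```haskell
import Control.Exception

-- try runs an IO action and captures a thrown exception as a
-- Left value instead of halting the program.
safeReadFile :: FilePath -> IO (Either IOException String)
safeReadFile path = try (readFile path)

-- In pure code, it's often preferable to model the failure
-- explicitly with Either rather than throwing at all.
safeDiv :: Integer -> Integer -> Either ArithException Integer
safeDiv _ 0 = Left DivideByZero
safeDiv x y = Right (x `div` y)
```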
For our final project, we're doing something a little weird: a small project whose original design we can modernize a bit. Surely no one who knows us from Twitter or IRC will be surprised that we've chosen something eccentric for this.