What's in the book?
The book is now complete and has had a final digital release!
Click on the chapters listed below to get details on what is covered.
What's covered in each chapter?
Nota bene: We don't mention it in every chapter description, but each chapter is loaded with examples and exercises.
You code to learn, and learn to code, with this book!
This chapter provides a brief introduction to the lambda calculus. This chapter was added some time after our initial May 2015 release in response to reader testing that showed later concepts were still a bit confusing. In addition to this introduction explaining the basic semantics, we list optional follow-up reading in order of difficulty for those who want to go further. Do not skip this! It'll make Haskell concepts, such as currying, easier to understand later on.
Your first step in learning Haskell. We start with getting comfortable in the interactive environment so that you can complete the exercises as you work through the book. In this chapter, we will:
- use Haskell code in the interactive environment and also from source files;
- understand the building blocks of Haskell: expressions and functions;
- learn some features of Haskell syntax and conventions of good Haskell style;
- modify simple functions.
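As a small taste of what that looks like, here is a sketch of a first source file of the kind you might load into the interactive environment (the names `ten` and `triple` are hypothetical, not code from the book):

```haskell
-- A hypothetical first source file: two small expressions and a function,
-- of the kind you write, load into GHCi, and modify.
module FirstSteps where

-- An expression: reduces to 10 when evaluated.
ten :: Integer
ten = 5 + 5

-- A function: one parameter, one body expression.
triple :: Integer -> Integer
triple x = x * 3

main :: IO ()
main = print (triple ten)  -- prints 30
```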
Strings and Printing
In this chapter we take a look at basic expressions that process a type of data called String. In this chapter we will:
- take an introductory look at types to understand the data structure called String;
- talk about the special syntax, or syntactic sugar, used for strings;
- print strings in the REPL environment;
- work with some simple functions that operate on this datatype.
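For a rough idea of the territory, a minimal sketch (the names here are illustrative, not from the book):

```haskell
module Strings where

-- "hello" is syntactic sugar for the list of characters
-- 'h' : 'e' : 'l' : 'l' : 'o' : []
greeting :: String
greeting = "hello" ++ " " ++ "world"

-- A simple function operating on Strings: keep the first five characters.
firstFive :: String -> String
firstFive = take 5

main :: IO ()
main = do
  putStrLn greeting              -- prints: hello world
  putStrLn (firstFive greeting)  -- prints: hello
```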
We've already looked at expressions involving numbers, characters, and lists of characters, also called strings. This chapter introduces the ways Haskell categorizes and structures data with types. We start here with some of the standard, built-in types, such as Integer and Bool. In this chapter we will:
- review types we have seen in previous chapters;
- learn about datatypes, type constructors, and data constructors;
- work with predefined datatypes;
- learn more about type signatures and a bit about typeclasses.
Haskell has a robust and expressive type system. Types play an important role in the readability, safety, and maintainability of Haskell code. In the last chapter, we looked at some built-in datatypes, such as Bool and tuples. In this chapter, we take a deeper look at the type system. Here we will:
- talk more about querying and reading type signatures;
- demonstrate that currying has, unfortunately, nothing to do with food;
- take a closer look at different kinds of polymorphism;
- look at type inference and how to declare types for our functions.
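To preview the food-free sense of currying, a small sketch (hypothetical names):

```haskell
module Currying where

-- Every Haskell function takes exactly one argument, so this
-- signature really means Integer -> (Integer -> Integer).
addThen :: Integer -> Integer -> Integer
addThen x y = x + y

-- Partial application: supply one argument, get a new function back.
addTen :: Integer -> Integer
addTen = addThen 10

main :: IO ()
main = print (addTen 5)  -- prints 15
```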
We've mentioned typeclasses in previous chapters, but a solid understanding of them is crucial to understanding Haskell. Typeclasses abstract patterns and provide methods for working with typed data. In the Types chapter, we looked at the way typeclasses interact with type variables and numeric types. This chapter explains some important predefined typeclasses and goes into more detail about how typeclasses work in general. In this chapter we:
- examine the typeclasses Eq, Num, Ord, Enum, and Show;
- learn about type-defaulting typeclasses and typeclass inheritance;
- look at some common but often implicit functions that create side effects.
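To give a feel for the two ways instances arise, a sketch using a hypothetical `Mood` datatype (not the book's own code):

```haskell
module Classes where

-- A datatype with a derived Show instance and a hand-written Eq instance.
data Mood = Blah | Woot deriving Show

instance Eq Mood where
  Blah == Blah = True
  Woot == Woot = True
  _    == _    = False

main :: IO ()
main = do
  print (Blah == Blah)  -- True, via our Eq instance
  print Woot            -- Woot, via the derived Show instance
```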
You might be asking yourself what this chapter is all about: haven't we been talking about functions all along? We have, but now we expand the kinds of functions you can understand and create. This is where you'll learn pattern matching and case expressions in more detail. You'll learn that Haskell functions are first-class entities that can be values in expressions or data structures. You'll learn that they can be passed as arguments to a function. And you'll become comfortable with how functions can make use of syntactic patterns to match on and unpack data constructors. You'll also stretch your legs by writing functions that embody decision procedures or logic. In one sentence, this chapter shows that
- Haskell functions are first-class entities that
- can be values in expressions, lists, or tuples;
- can be passed as parameters to a function;
- can be returned from a function as a result;
- make use of syntactic patterns.
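The points above can be sketched in a few lines (illustrative names, not the book's exercises):

```haskell
module Matching where

-- Pattern matching unpacks a data constructor (here, the pair).
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- Functions are first-class: applyTwice takes a function as an argument.
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

-- A case expression embodying a small decision procedure.
describe :: Int -> String
describe n = case compare n 0 of
  LT -> "negative"
  EQ -> "zero"
  GT -> "positive"

main :: IO ()
main = do
  print (swap (1 :: Int, "one"))
  print (applyTwice (+ 1) (0 :: Int))  -- 2
  putStrLn (describe 5)                -- positive
```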
Recursion is defining a function in terms of itself via self-referential expressions. It means that the function will continue to call itself and repeat its behavior until some condition is met to return a result. It's an important concept in Haskell that gives us a means of expressing indefinite or incremental computation. In this chapter we:
- explore what recursion is and how recursive functions evaluate;
- go step-by-step through the process of writing recursive functions;
- have fun with bottom.
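Both ideas in miniature, with hypothetical names (and the bottom carefully left unevaluated):

```haskell
module Recur where

-- A recursive function: calls itself until it reaches the base case.
factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)

-- And bottom: a recursive expression that never returns a result.
-- Fine to define; just don't force it.
forever' :: a
forever' = forever'

main :: IO ()
main = print (factorial 5)  -- 120
```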
Lists do double duty in Haskell. Lists serve as a way to refer to and process a collection or plurality of values. But they may also be an infinite series of values, usually generated by a function, which allows them to act as a stream datatype. In many respects, lists are as much a control structure as they are a data structure. In this chapter, we will:
- explain the list datatype and how to pattern match on lists;
- practice standard library functions for operating on lists;
- learn about the underlying representations of lists;
- see what that representation means for their evaluation;
- and do a whole bunch of exercises!
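A first glimpse of matching on the list constructors (illustrative, not an exercise from the book):

```haskell
module Lists where

-- The list datatype is roughly:  data [] a = [] | a : [a]
-- so pattern matching mirrors its two data constructors.
describeList :: [a] -> String
describeList []      = "empty"
describeList [_]     = "one element"
describeList (_ : _) = "two or more"

main :: IO ()
main = putStrLn (describeList [1, 2, 3 :: Int])  -- two or more
```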
Folding is a concept whose usefulness and importance extend beyond lists, but lists are often how it is introduced. Folds arise from a more general concept that we discuss in the chapter. This chapter is a thorough look at the topic of folding lists in Haskell. In this chapter we will:
- explain what folds are and how they work;
- describe the evaluation processes of folds in great detail;
- walk through the process of writing folding functions;
- introduce scans, functions that are related to folds.
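A small sketch of the fold/scan pair (hypothetical names):

```haskell
module Folds where

-- foldr replaces (:) with the given function and [] with the start value:
--   foldr (+) 0 [1, 2, 3]  ==  1 + (2 + (3 + 0))
total :: [Int] -> Int
total = foldr (+) 0

-- A scan is the fold's cousin that keeps every intermediate result.
runningTotals :: [Int] -> [Int]
runningTotals = scanl (+) 0

main :: IO ()
main = do
  print (total [1, 2, 3])          -- 6
  print (runningTotals [1, 2, 3])  -- [0,1,3,6]
```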
We have spent a lot of time exploring datatypes, so you may think we've covered everything already. This chapter explains how to construct your own datatypes. But to understand that, first we need to explain the differences among datatypes more fully and understand what it means when we say datatypes are algebraic.
Haskell offers sum types, product types, product types with record syntax, type aliases, and a special kind of datatype called a newtype that offers a different set of options and constraints from either type synonyms or data declarations. We explain each of these in this chapter and show you how to exploit them for maximum utility and type safety. This chapter will:
- explain the "algebra" of algebraic datatypes;
- analyze the construction of data constructors;
- spell out when and how to write your own datatypes;
- clarify usage of type synonyms and newtype;
- introduce kinds.
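The vocabulary above in one small sketch (a hypothetical `Shape` type, not the book's own example):

```haskell
module Shapes where

-- A sum type: a Shape is a Circle *or* a Rect.
-- Each data constructor is a product of its fields.
data Shape
  = Circle Double
  | Rect Double Double
  deriving Show

-- newtype: a zero-cost wrapper that creates a distinct type
-- for extra type safety.
newtype Radius = Radius Double deriving Show

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

main :: IO ()
main = print (area (Rect 3 4))  -- 12.0
```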
Sometimes it's not convenient or possible for every value in a datatype to make sense for your programs. When that happens in Haskell, we use explicit datatypes to signal when our functions have received a combination of inputs that doesn't make sense. Later, we'll see how to defend against those adverse inputs at the time we construct our datatypes, but the Maybe and Either datatypes we will demonstrate here are commonly used in Haskell programs. This chapter includes:
- Nothing, or Just Maybe
- Either left or right, but not both
- anamorphisms, but not animorphs.
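Both datatypes in action, sketched with hypothetical helper names:

```haskell
module Safe where

-- Maybe: Nothing signals "no sensible answer".
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Either: Left carries a reason for the failure.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)

main :: IO ()
main = do
  print (safeHead ([] :: [Int]))  -- Nothing
  print (safeDiv 10 2)            -- Right 5
```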
Modules and Projects
In this chapter, we will be building a small, interactive hangman-style game.
This chapter's primary focus is not so much on code but on how to set up a project in Haskell, use the package manager known as Cabal, and work with (Haskell) modules. You will implement part of the hangman game yourself, but much of the code is already written for you, and we've tried to explain the structure as well as we can at this point in the book. Some of it will remain opaque until we've covered at least monads and IO. But if you finish the chapter feeling like you now know how to set up a project environment and get things running, then this chapter will have accomplished its goal. We expect to add instructions on using Stack to this chapter in the near future. In this chapter, we'll cover:
- building Haskell programs with modules;
- using the Cabal package manager;
- conventions around Haskell project organization;
- building a small interactive game.
This chapter, like the one before it, is focused on practical matters rather than Haskell code per se. We will cover two testing libraries here, representing two different but complementary testing methodologies, and how and when to use them. At the end of the chapter, there are a number of exercises that ask you to write your own tests for practice.
Testing is a core part of the working programmer's toolkit, and Haskell is no exception. Well-specified types can enable programmers to avoid many obvious and tedious tests that might otherwise be necessary to maintain in untyped programming languages, but there's still a lot of value to be obtained in executable specifications. To that end, this chapter covers:
- the whats and whys of unit/spec testing and property testing;
- using the testing libraries Hspec and QuickCheck;
- a bit of fun with Morse code.
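To hint at the property-testing half of that (without the libraries themselves), here is the flavor of a property: a law stated as a plain function, checked over some sample inputs. QuickCheck does this with randomly generated inputs instead of a hand-picked list:

```haskell
module Props where

-- A property in the QuickCheck style: for any list, reversing twice
-- gives back the original list.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

main :: IO ()
main = print (all prop_reverseInvolutive [[], [1], [1, 2, 3]])  -- True
```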
Monoid and Semigroup
One of the finer points of the Haskell community has been its propensity for recognizing and expressing abstract patterns in code. Over the next few chapters, we're going to be looking at some of the patterns represented via typeclasses. We begin with one of the simpler patterns, one that doesn't require understanding higher-kinded types. This chapter includes:
- Learning how to use QuickCheck to get a high-probability read on the validity of your monoid and semigroup instances;
- Lots of practice implementing Monoid and Semigroup instances so that the abstract becomes more concrete to you.
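A small sketch of such an instance, using a hypothetical `Total` wrapper (the standard library's `Sum` newtype plays the same role):

```haskell
module Mon where

-- A wrapper giving Int a Semigroup/Monoid under addition.
newtype Total = Total Int deriving (Eq, Show)

instance Semigroup Total where
  Total x <> Total y = Total (x + y)

instance Monoid Total where
  mempty = Total 0   -- the identity for (<>)

main :: IO ()
main = print (mconcat [Total 1, Total 2, Total 3])  -- Total 6
```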
In the last chapter on Monoid, we saw what it means to talk about an algebra and turn that into a typeclass. This chapter and the two that follow, on Applicative and Monad, will be on a similar topic. Each of these algebras is more powerful than the last, but the general concept here will remain the same: we abstract out a common pattern, make certain it follows some laws, give it an awesome name, and wonder how we ever lived without it. Monads sort of steal the Haskell spotlight, but you can do more with Functor and Applicative than many people realize. Also, understanding Functor and Applicative is important to a deep understanding of Monad.
This chapter is all about Functor, and Functor is all about mapping functions over structure. We saw fmap way back in Lists and noted that it worked just the same as map, but we also said back then that the difference is that you can use fmap with structures that aren't lists. Now we will see what that means. This chapter includes:
- the return of the higher-kinded types;
- fmaps galore, and not just on lists;
- digressions about dusty logicians;
- words about typeclasses and constructor classes;
- some func-y puns, probably;
- more QuickChecking!
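The fmaps-galore point, in miniature (the `Pair` type is a hypothetical example, not from the book):

```haskell
module Fun where

-- fmap maps a function over a structure, leaving the structure intact.
-- A two-slot structure with its own Functor instance:
data Pair a = Pair a a deriving (Eq, Show)

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

main :: IO ()
main = do
  print (fmap (+ 1) [1, 2, 3 :: Int])       -- just like map on lists
  print (fmap (+ 1) (Just (1 :: Int)))      -- Just 2: not a list!
  print (fmap (+ 1) (Pair 1 2 :: Pair Int)) -- Pair 2 3
```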
In the previous chapters, we've seen two common algebras that are used as typeclasses in Haskell. We come now to Applicative. Applicative is a CIA CENSORED. The Applicative typeclass allows for function application lifted over structure (like Functor). But with Applicative the function we're applying is also embedded in the same structure. In this chapter, we:
- define and explore the Applicative typeclass and its core operations;
- make the usual chitchat about laws and instances;
- do a lot of lifting;
- still quick-checking;
- give you some Validation.
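The "function embedded in the same structure" point, sketched:

```haskell
module App where

-- With Functor we lift a one-argument function over structure;
-- with Applicative the function itself can live inside the structure too.
main :: IO ()
main = do
  print (pure (+ 1) <*> Just 2)         -- Just 3
  print ((+) <$> Just 2 <*> Just 3)     -- Just 5: lifting over two Maybes
  print ((+) <$> [1, 2] <*> [10, 20])   -- [11,21,12,22]
```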
Finally we come to one of the most talked about structures in Haskell: the monad. Monads are not, strictly speaking, necessary to Haskell. Monads are powerful and fun, but they do not define Haskell. Rather, monads are defined in terms of Haskell. Monads are applicative functors, but they have something special about them that makes them different from and more powerful than Applicative or Functor alone. In this chapter, we:
- define Monad, its operations and laws;
- look at several examples of monads in practice;
- write the Monad instances for various types;
- nuke some misinformation about monads from orbit;
- are not done quick-checking;
- show you how disappointingly easy and unmagical Monads can be.
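To show just how unmagical, here is that extra power in one sketch: with Monad, the next computation can depend on the result of the previous one (hypothetical names):

```haskell
module Bind where

halve :: Int -> Maybe Int
halve n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- (>>=) sequences the two halvings; one Nothing short-circuits the rest.
quarter :: Int -> Maybe Int
quarter n = halve n >>= halve   -- same as: do h <- halve n; halve h

main :: IO ()
main = do
  print (quarter 12)  -- Just 3
  print (quarter 6)   -- Nothing: halve 6 = Just 3, and halve 3 fails
```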
Abstract structure applied
We thought you'd like to see Monoid, Functor, Applicative, and Monad in the wild as it were. Consider this a breezy survey of how Haskellers write code when they think no one is looking and a pleasant break from your regularly scheduled exercises.
This typeclass has been appearing in type signatures at least since Chapter 3, but for your purposes in those early chapters, we said you could think of a Foldable thing as a list. As you saw in the chapter on folds, lists are certainly foldable data structures. But lists are not the only foldable data structures, so this chapter will expand on the idea of catamorphisms and generalize it to many datatypes. This chapter covers:
- the Foldable class and its core operations;
- the monoidal nature of folding;
- standard operations derived from folding.
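The monoidal nature of folding in one sketch: `foldMap` maps every element to a monoid and combines the results, and many standard operations fall out of it (the `totalAges` name is illustrative):

```haskell
module Foldables where

import Data.Monoid (Sum (..))

-- Works on any Foldable, not just lists.
totalAges :: Foldable t => t Int -> Int
totalAges = getSum . foldMap Sum

main :: IO ()
main = do
  print (totalAges [1, 2, 3])  -- 6
  print (totalAges (Just 10))  -- 10: Maybe is Foldable too
```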
Traversable allows you to transform elements inside the structure like a Functor, producing Applicative effects along the way, and lift those elements of Applicative structure outside of the Traversable structure. Traversable is commonly described as a way to traverse a data structure, mapping a function inside a structure while accumulating the applicative contexts along the way. This is easiest to see, perhaps, through liberal demonstration of examples. In this chapter, we will:
- explain the Traversable typeclass and its canonical functions;
- explore examples of Traversable in practical use;
- tidy up some code using this typeclass, including with the wreq HTTP client;
- and, of course, write some Traversable instances.
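One liberal demonstration up front: `traverse` maps an effectful function over a structure and flips the layers, so a `[String]` becomes a `Maybe [Int]` rather than a `[Maybe Int]` (hypothetical helper name):

```haskell
module Trav where

import Text.Read (readMaybe)

-- If any element fails to parse, the whole result is Nothing.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])  -- Just [1,2,3]
  print (parseAll ["1", "oops"])    -- Nothing
```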
When writing applications, programmers often need to pass around some information that may be needed intermittently or universally throughout an entire application. We don't want to simply pass this information as arguments as it can make the code harder to read and harder to maintain. To address this, we use the Reader Monad. In this chapter, we:
- examine the Functor, Applicative, and Monad instances for functions;
- learn about the Reader newtype;
- see some examples of using Reader.
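As a preview of the first bullet: functions themselves are the "original" Reader, because the shared environment is just the function argument, threaded in for free by the Monad instance for `((->) r)`. The `Config` type here is a hypothetical example:

```haskell
module ReaderDemo where

data Config = Config { verbose :: Bool, name :: String }

-- This do block uses the Monad instance for functions:
-- every line implicitly reads from the same Config.
greeting :: Config -> String
greeting = do
  n <- name
  v <- verbose
  pure (if v then "Hello, " ++ n ++ "!" else n)

main :: IO ()
main = putStrLn (greeting (Config True "world"))  -- Hello, world!
```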
What if I need state? In Haskell we have many means of representing, accessing, and modifying state. We can think of state as data that exists in addition to the inputs and outputs of our functions, data that can potentially change after each function is evaluated. In this chapter, we will:
- talk about what state means;
- explore some ways of handling state in Haskell;
- generate some random numbers;
- and examine the State newtype and Monad instance.
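A minimal sketch of the State newtype (the real one lives in the transformers library), applied to the chapter's random-number theme with a toy generator of our own invention:

```haskell
module StateDemo where

-- A State s a is a function that takes a state and returns a result
-- plus the (possibly updated) state.
newtype State s a = State { runState :: s -> (a, s) }

-- "Roll a die" by stepping a toy linear congruential generator:
-- the new seed is the state we pass along.
roll :: State Int Int
roll = State $ \seed ->
  let seed' = (seed * 1103515245 + 12345) `mod` 2147483648
  in (seed' `mod` 6 + 1, seed')

main :: IO ()
main = print (fst (runState roll 42))  -- a die roll from seed 42
```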
Parsing originally referred to analyzing a sentence of a natural language and identifying the constituent parts of speech and their relationships. Parsing in computer science is related to this idea: it is a means of breaking up a chunk of data and finding and processing the parts of it you care about. It allows you to accept serialized input in the form of a sequence of characters (textual data) or bytes (raw binary data) and turn that into a value of a structured datatype.
Often when we are parsing things, the structured datatype that results will look something like a tree. In Haskell, we sometimes end up having a tree just because recursive types are so easy to express in Haskell. In this chapter, we:
- get a tutorial in the Trifecta parsing library to demonstrate the basics of parsing;
- demonstrate the awesome power of parser combinators;
- demonstrate how to abstract over and swap out different parsing backends using the parsers API;
- demonstrate how to run the same "parser" with Trifecta for nice errors and attoparsec for speed without changing our code;
- marshall and unmarshall some JSON data with a popular JSON library;
- talk about tokenization and how to not let it ruin your day.
This chapter and the next are about monad transformers, both the principles behind them and the practicalities of using them. For many programmers, monad transformers are indistinguishable from magick, so we want to approach them from both angles and demonstrate that they are both comprehensible via their types and quite practical in normal programming.
There are many times in "real code" when composing monads is desirable. Different monads allow us to work with different effects. Composing monads allows you to build up computations with multiple effects. By stacking, for example, a Maybe monad with an IO, you can be performing IO actions while also building up computations that have a possibility of failure, handled by the Maybe monad. In this chapter, we will:
- demonstrate why composing two monads does not give you another monad;
- examine the Identity and Compose types;
- manipulate types until we can make monads compose;
- meet some common monad transformers;
- work through an Identity crisis.
The last chapter demonstrated why we need monad transformers and the basic type manipulation that's going on to make that bit o' magick happen. Monad transformers are important in a lot of everyday Haskell code, though, so we want to dive deeper and make sure we have a good understanding of how to use them in practice. Even after you know how to write all the transformer instances, managing stacks of transformers in an application can be tricky. The goal of this chapter is to get comfortable with it in a way that a basic introduction alone could not have accomplished. In this chapter, we:
- work through more monad transformer types and instances;
- look at the ordering and wrapping of monad transformer stacks;
- lift, lift, lift, and lift some more with MonadTrans;
- see practical examples of monad transformers in real code and how to translate between different approaches;
- learn how to fix broken code that uses monad transformers but doesn't type-check;
- learn a means of avoiding manual lifting.
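A minimal sketch of one such transformer (the real `MaybeT` lives in the transformers library): it stacks Maybe's failure effect on top of any monad `m`, and `liftMT` is a hand-rolled stand-in for `lift`:

```haskell
module MaybeTDemo where

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Functor m => Functor (MaybeT m) where
  fmap f (MaybeT ma) = MaybeT (fmap (fmap f) ma)

instance Applicative m => Applicative (MaybeT m) where
  pure = MaybeT . pure . Just
  MaybeT mf <*> MaybeT ma = MaybeT ((<*>) <$> mf <*> ma)

instance Monad m => Monad (MaybeT m) where
  MaybeT ma >>= f = MaybeT $ do
    mb <- ma                    -- run in the underlying monad m
    case mb of
      Nothing -> pure Nothing   -- failure short-circuits
      Just x  -> runMaybeT (f x)

-- Lift an action from the underlying monad into the stack.
liftMT :: Functor m => m a -> MaybeT m a
liftMT = MaybeT . fmap Just

main :: IO ()
main = do
  r <- runMaybeT $ do
    liftMT (putStrLn "doing IO inside MaybeT")
    pure (21 * 2 :: Int)
  print r  -- Just 42
```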
This chapter concerns the ways in which Haskell programs are evaluated. We aim to give you enough information about Haskell's evaluation strategies that you'll be able to reason confidently about the reduction process of various expressions and programs and introduce stricter evaluation where that is wanted. Specifically, we will:
- define call-by-name and call-by-need evaluation;
- explain the main points of nonstrict evaluation;
- live the Thunk Life;
- consider the runtime behavior of nonstrict code in terms of sharing;
- develop methods for observing sharing and measuring program efficiency;
- bottom out with the bottoms.
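Thunks and bottoms in two lines each (hypothetical names):

```haskell
module Lazy where

-- Under nonstrict evaluation an expression stays a thunk until demanded,
-- so passing bottom around is fine as long as nothing forces it.
boom :: Int
boom = error "bottom!"

ignoreFirst :: a -> b -> b
ignoreFirst _ y = y

main :: IO ()
main = do
  print (ignoreFirst boom 5)     -- 5: boom is never evaluated
  print (take 3 [(1 :: Int) ..]) -- [1,2,3]: an infinite list, partly forced
```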
Performance and data structures
Data structures are kind of important. Computers are fast, but they aren't getting much faster, at least not the CPU. This is usually the lead-in for a parallelism/concurrency sales pitch. But this isn't that book.
This chapter is here to help you choose the optimal data structures for your programs. We can't prescribe one or the other of similar data structures because how effective they are will depend a lot on what you're trying to do. So, our first step will be to give you tools to measure for yourself how different structures will perform in your context. This includes learning how to profile time and memory as well as benchmarking your code. We'll also cover some of the mistakes that can cause your memory usage and evaluation time to explode. This chapter will:
- demonstrate how to measure the usage of time and space in your programs;
- demonstrate how to use GHC's profiling tools for memory and time;
- demonstrate how to use Criterion for benchmarking with proper statistical significance;
- offer guidelines on when weak head normal form or normal form are appropriate when benchmarking code;
- define constant applicative forms and explain argument saturation;
- explain the relationship between unpacking and strictness;
- demonstrate and critically evaluate when to use different data structures in different circumstances;
- demonstrate how to use mutable datatypes and ST to make our code more-fasterer and discuss when it's appropriate to do so;
- sacrifice some jargon for the jargon gods.
Let's talk about IO.
We've used the IO type at various times throughout the book, with cursory explanations of what it's all about. You no doubt know that we use this type in Haskell as a means of keeping our chocolate separate from our peanut butter. Perhaps you're wondering how it all works, what's underneath that opaque type. To many people, IO seems mysterious.
Most explanations of the IO type in Haskell don't help much, either. They may seem designed to mess with the reader's head rather than convey anything useful. Don't look now, but there's somebody writing an explanation of IO right now that uses van Laarhoven Free Monads and costate comonad coalgebras to explain something much simpler than either topic.
We're not going to do any of that here, but we are going to try to demystify IO a bit. The important thing about IO is that it's a special kind of datatype that CIA CENSORED. You got a taste of this with our short introduction to the ST type in the previous chapter. In this chapter, we will:
- explain why most explanations of IO are useless and how you can critically examine them;
- explain how IO works operationally and what it should mean to you when you read a type that has IO in it;
- provide a bit more detail and intuition for how you should think about the IO Functor, Applicative, and Monad;
- show you how to use a function you probably shouldn't ever use and for which we'll have to beg forgiveness for the next decade.
Let's face it: in the execution of a program, a lot of things can happen, not all of them expected or desired. In those unhappy times when things have not gone as we wanted them to, we will throw or raise an exception. The term exception refers to the condition that has interrupted the expected execution of the program. Encountering an exception causes an error, or exception, message to appear, informing you that due to some condition you weren't prepared for, the execution of the program has halted in an unfortunate way.
Letting exceptions arise as they will — and the program halt willy-nilly — is suboptimal. Exception handling is a way of dealing with errors and giving the program some alternate means of execution or termination should one arise. This chapter is going to cover both exceptions and what they look like as well as various means of handling them.
In this chapter, we will:
- examine the Exception typeclass and methods;
- dip our toes into existential quantification;
- discuss ways of handling exceptions.
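One of those handling mechanisms, sketched with base's `Control.Exception` (the `render` helper is a hypothetical name):

```haskell
module Ex where

import Control.Exception (ArithException, evaluate, try)

-- Render the outcome of a computation that may have thrown.
render :: Either ArithException Int -> String
render (Left e)  = "caught: " ++ show e
render (Right n) = show n

main :: IO ()
main = do
  -- try runs the action and returns Left if the matching exception arises.
  r <- try (evaluate (1 `div` 0 :: Int))
  putStrLn (render r)  -- caught: divide by zero
```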
For our final project, we're doing something a little weird: a project that is small but can be modernized a bit from its original design. Surely no one who knows us from Twitter or IRC will be surprised that we've chosen something eccentric for this.