Elm has no null and no exceptions, which forces developers to state explicitly what should happen when expectations aren’t met; combined with a good compiler and a strong static type system, this makes the code super descriptive and rock solid.
A classic example is when you query a data structure which can be empty:
> dogs = [ "Lassie", "Scooby-Doo" ]
["Lassie","Scooby-Doo"] : List String
> dogs |> List.head
Just "Lassie" : Maybe String
> dogs = []
[] : List a
> dogs |> List.head
Nothing : Maybe a
Meaning you may handle Maybe with Maybe.map, Maybe.andThen, Maybe.withDefault and so on if you want to ensure you handle the uncertainty of actually holding a value:
> ["Lassie"]
|> List.head
|> Maybe.map String.toUpper
|> Maybe.withDefault "oh no ;("
"LASSIE" : String
> []
|> List.head
|> Maybe.map String.toUpper
|> Maybe.withDefault "oh no ;("
"oh no ;(" : String
Same goes with Result, which is basically a Maybe with an alternate value (typically an error) attached:
> findDog name =
List.filter ((==) name)
>> List.head
>> Result.fromMaybe ("oh no, can't find " ++ name)
<function> : String -> List String -> Result String String
> ["Lassie", "Scooby-Doo"]
|> findDog "Scooby-Doo"
Ok "Scooby-Doo" : Result String String
> ["Lassie", "Scooby-Doo"]
|> findDog "Rintintin"
Err ("oh no, can't find Rintintin") : Result String String
So really, Result is super useful. It’s so useful that sometimes you want to use it a lot, e.g. in records¹:
type alias Dog = String
type alias Error = String
type alias FavoriteDogs =
{ dogSlot1 : Result Error Dog
, dogSlot2 : Result Error Dog
, dogSlot3 : Result Error Dog
, dogSlot4 : Result Error Dog
, dogSlot5 : Result Error Dog
, dogSlot6 : Result Error Dog
}
Hmm wait, imagine you’re only interested in a FavoriteDogs record when all six available slots are filled. Checking for this is going to be painful:
showDogs : FavoriteDogs -> Html msg
showDogs favorites =
case favorites.dogSlot1 of
Ok dog1 ->
case favorites.dogSlot2 of
Ok dog2 ->
case favorites.dogSlot3 of
Ok dog3 ->
-- To be continued… At some point
-- we can use dog1, dog2 … dog6
Err error ->
Html.text error
Err error ->
Html.text error
Err error ->
Html.text error
Luckily we have the Result.map family of functions:
firstTwoDogs : FavoriteDogs -> Result Error Dog
firstTwoDogs { dogSlot1, dogSlot2 } =
Result.map2
(\dog1 dog2 -> dog1 ++ " and " ++ dog2)
dogSlot1
dogSlot2
firstThreeDogs : FavoriteDogs -> Result Error Dog
firstThreeDogs { dogSlot1, dogSlot2, dogSlot3 } =
Result.map3
(\dog1 dog2 dog3 ->
String.join ", " [ dog1, dog2, dog3 ]
)
dogSlot1
dogSlot2
dogSlot3
But wait, we don’t have Result.map6! The core implementation of Result.map5 is pretty verbose already, I can understand why they avoided going further haha. But more annoyingly, that means you don’t have a convenient helper for mapping more than 5 Results at once, for example to build a record holding 6.
Also, ideally we’d rather deal with a data structure with direct access, to avoid messing around too much with the Result API:
type alias FavoriteDogs =
{ dogSlot1 : Dog
, dogSlot2 : Dog
, dogSlot3 : Dog
, dogSlot4 : Dog
, dogSlot5 : Dog
, dogSlot6 : Dog
}
Here’s a convenient helper I use to build a record using the pipeline builder pattern; it’s often known in functional languages as apply, but I like resolve:
resolve : Result x a -> Result x (a -> b) -> Result x b
resolve result =
Result.andThen (\partial -> Result.map partial result)
Which can be shortened even further (though becoming less explicit²) with:
resolve : Result x a -> Result x (a -> b) -> Result x b
resolve =
Result.map2 (|>)
This little helper allows creating a fully-qualified FavoriteDogs record this way:
build : Result Error FavoriteDogs
build =
Ok FavoriteDogs
|> resolve (findDog "Lassie" dogs)
|> resolve (findDog "Toto" dogs)
|> resolve (findDog "Trakr" dogs)
|> resolve (findDog "Laïka" dogs)
|> resolve (findDog "Balto" dogs)
|> resolve (findDog "Jofi" dogs)
You might have already seen this pattern used in the popular elm-json-decode-pipeline package.
The cool thing with this approach is that if a single result fails, the whole operation fails with the error of the first failure encountered during the build process:
dogs : List Dog
dogs =
[ "Lassie", "Toto", "Trakr", "Laïka", "Balto", "Jofi" ]
findDog : Dog -> List Dog -> Result Error Dog
findDog name =
List.filter ((==) name)
>> List.head
>> Result.fromMaybe ("oh no, can't find " ++ name)
type alias FavoriteDogs =
{ dogSlot1 : Dog
, dogSlot2 : Dog
, dogSlot3 : Dog
, dogSlot4 : Dog
, dogSlot5 : Dog
, dogSlot6 : Dog
}
buildOk : Result Error FavoriteDogs
buildOk =
Ok FavoriteDogs
|> resolve (findDog "Lassie" dogs)
|> resolve (findDog "Toto" dogs)
|> resolve (findDog "Trakr" dogs)
|> resolve (findDog "Laïka" dogs)
|> resolve (findDog "Balto" dogs)
|> resolve (findDog "Jofi" dogs)
-- Gives:
-- Ok
-- { dogSlot1 = "Lassie"
-- , dogSlot2 = "Toto"
-- , dogSlot3 = "Trakr"
-- , dogSlot4 = "Laïka"
-- , dogSlot5 = "Balto"
-- , dogSlot6 = "Jofi"
-- }
buildErr : Result Error FavoriteDogs
buildErr =
Ok FavoriteDogs
|> resolve (findDog "Lassie" dogs)
|> resolve (findDog "Toto" dogs)
|> resolve (findDog "Garfield" dogs) -- woops!
|> resolve (findDog "Laïka" dogs)
|> resolve (findDog "Balto" dogs)
|> resolve (findDog "Jofi" dogs)
-- Gives:
-- Err ("oh no, can't find Garfield")
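For readers more at home in JavaScript, the same resolve/andMap idea can be sketched with a minimal hand-rolled Ok/Err encoding. This is entirely illustrative: the encoding and names are mine, not any library’s.

```javascript
// Minimal Result encoding: Ok carries a value, Err carries an error.
const Ok = value => ({ ok: true, value });
const Err = error => ({ ok: false, error });

// resolve (a.k.a. apply/andMap): feed an Ok value into an Ok function,
// short-circuiting on the first Err encountered along the pipeline.
const resolve = result => resultFn => {
  if (!resultFn.ok) return resultFn; // an earlier step already failed
  if (!result.ok) return result;     // this step failed
  return Ok(resultFn.value(result.value));
};

// Build a two-slot record step by step, pipeline style (curried constructor).
const favoriteDogs = d1 => d2 => ({ dogSlot1: d1, dogSlot2: d2 });

const buildOk = resolve(Ok("Toto"))(resolve(Ok("Lassie"))(Ok(favoriteDogs)));
console.log(buildOk);
// { ok: true, value: { dogSlot1: "Lassie", dogSlot2: "Toto" } }

const buildErr = resolve(Ok("Toto"))(resolve(Err("oh no"))(Ok(favoriteDogs)));
console.log(buildErr);
// { ok: false, error: "oh no" }
```

As in the Elm version, the first failure wins and the rest of the pipeline is skipped.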
That’s all folks, hope it’s useful.
This post has been written in one hour tops (well, more than that with all the feedback received). This is an attempt at forcing myself to write again on this blog, just don’t judge me too harshly!
Thanks to Alexis, Ethan, Mathieu, Mathieu and Rémy for their precious feedback.
Thanks to elm-search, I could find that the elm-result-extra package provides andMap, which allows exactly the same thing as my resolve helper.
1. For the sake of simplicity and disambiguation, we’re aliasing Dog and Error as strings here. This is not recommended practice; you should rather use opaque types instead. ↩
2. The type signature and implementation of resolve might be hard to grasp for the non-seasoned Elm developer; this section of the Elm Guide may be a good read. ↩
We recently published elm-daterange-picker, a date range picker written in Elm. It was the perfect occasion to investigate what a reasonable API for a reusable stateful view component would look like.
Many component/widget-oriented Elm packages feature a rather raw Elm Architecture (TEA) API, directly exposing Model, Msg(..), init, update and view, so you can basically import what defines an actual application and embed it within your own application.
With these, you usually end up writing things like this:
import Counter
type alias Model =
{ counter : Counter.Model
, value : Maybe Int
}
type Msg
= CounterMsg Counter.Msg
init : () -> ( Model, Cmd Msg )
init _ =
( { counter = Counter.init, value = Nothing }
, Cmd.none
)
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
CounterMsg counterMsg ->
let
( newCounterModel, newCounterCommands ) =
Counter.update counterMsg model.counter
in
( { model
| counter = newCounterModel
, value =
case counterMsg of
Counter.Apply value ->
Just value
_ ->
Nothing
}
, newCounterCommands |> Cmd.map CounterMsg
)
view : Model -> Html Msg
view model =
div []
[ Counter.view model.counter
|> Html.map CounterMsg
, text (model.value |> Maybe.map String.fromInt |> Maybe.withDefault "")
]
This certainly works, but let’s be frank for a minute and admit this is super verbose and not very developer friendly:
- Cmd.map and Html.map here and there
- matching on Counter.Msg to intercept whatever event interests you…
- Counter exposes all its Msgs, which are implementation details you now rely on.
There’s another way, which Evan explained in his now deprecated elm-sortable-table package. Among the many good points he makes, one idea struck me as brilliantly simple yet effective for simplifying such stateful view components’ API design:
State updates can be managed right from event handlers!
Let’s imagine a simple counter; what if, when clicking the increment button, instead of calling onClick with some Increment message, we called a user-provided one with the new counter state updated accordingly?
-- Counter.elm
view : (Int -> msg) -> Int -> Html msg
view toMsg counter =
button [ onClick (toMsg (counter + 1)) ]
[ text "increment" ]
Or if you want to use an opaque type, which is an excellent idea for maintaining the smallest API surface area:
-- Counter.elm
type State
= State Int
view : (State -> msg) -> State -> Html msg
view toMsg (State value) =
button [ onClick (toMsg (State (value + 1))) ]
[ text "increment" ]
Note that as we’re dealing with a counter state, we didn’t bother having anything other than a simple Int to represent it. But you could of course use a record or anything you want.
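The same callback-carries-new-state idea can be sketched in plain JavaScript terms. This is an illustrative analogy, not tied to any framework; all names are mine.

```javascript
// The view gets the current state plus a callback; the event handler
// computes the *next* state and hands it to the callback, so the parent
// never needs to know about the component's internal messages.
function counterView(state, onChange) {
  return {
    tag: "button",
    text: "increment",
    onClick: () => onChange(state + 1),
  };
}

// Parent "application" owning the state:
let appState = 0;
const button = counterView(appState, next => { appState = next; });
button.onClick(); // simulate a click
console.log(appState); // 1
```

In a real UI the view would be re-rendered with the new state after each change, just like Elm re-runs view after update.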
Handling internal state updates could be as simple as creating internal, unexposed Msg and update functions:
-- Counter.elm
type State
= State Int
type Msg
= Dec
| Inc
update : Msg -> Int -> Int
update msg value =
case msg of
Dec ->
value - 1
Inc ->
value + 1
view : (State -> msg) -> State -> Html msg
view toMsg (State value) =
div []
[ button [ onClick (toMsg (State (update Dec value))) ]
[ text "decrement" ]
, button [ onClick (toMsg (State (update Inc value))) ]
[ text "increment" ]
]
We should also expose helpers to retrieve (or set) values from the opaque State type:
-- Counter.elm
getValue : State -> Int
getValue (State value) =
value
So for instance, to use this Counter component in your own application, you just have to write this:
import Counter
type alias Model =
{ counter : Counter.State
, value : Maybe Int
}
type Msg
= CounterChanged Counter.State
init : () -> ( Model, Cmd Msg )
init _ =
( { counter = Counter.init, value = Nothing }
, Cmd.none
)
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
CounterChanged state ->
( { model | counter = state, value = Just (Counter.getValue state) }
, Cmd.none
)
view : Model -> Html Msg
view model =
div []
[ Counter.view CounterChanged model.counter
, text (model.value |> Maybe.map String.fromInt |> Maybe.withDefault "")
]
Notice how our update function is dramatically simpler to write and to understand. Also, no need to import (and rely on) a lot from the package module, which makes it both easier to consume and maintain, thanks to the opaque State type encapsulating implementation details.
Of course a counter wouldn’t be worth creating a package for, though it may highlight the concept better. Don’t hesitate to read elm-daterange-picker’s source code and demo code to look at a real-world application of this design principle.
Sometimes in Elm you struggle with the most basic things.
Especially when you come from a JavaScript background, where chaining HTTP requests is relatively easy thanks to Promises. Here’s a real-world example leveraging the Github public API, where we fetch a list of Github events, pick the first one and query some user information from its unique identifier.
The first request uses the https://api.github.com/events endpoint, and the retrieved JSON looks like this:
[
{
"id": "987654321",
"type": "ForkEvent",
"actor": {
"id": 1234567,
"login": "foobar",
}
},
]
I’m purposely omitting a lot of other properties from the records here, for brevity.
The second request we need to make is on the https://api.github.com/users/{login} endpoint, and its body looks like this:
{
"id": 1234567,
"login": "foobar",
"name": "Foo Bar",
}
Again, I’m just displaying a few fields from the actual JSON body here.
So we basically want to:
- fetch the list of latest Github events,
- pick the first event’s actor.login property,
- fetch that user’s details from this login.
Using JavaScript, that would look like this:
fetch("https://api.github.com/events")
.then(responseA => {
return responseA.json()
})
.then(events => {
if (events.length == 0) {
throw "No events."
}
const { actor : { login } } = events[0]
return fetch(`https://api.github.com/users/${login}`)
})
.then(responseB => {
return responseB.json()
})
.then(user => {
if (!user.name) {
console.log("unspecified")
} else {
console.log(user.name)
}
})
.catch(err => {
console.error(err)
})
It would get a little fancier using async/await:
try {
const responseA = await fetch("https://api.github.com/events")
const events = await responseA.json()
if (events.length == 0) {
throw "No events."
}
const { actor: { login } } = events[0]
const responseB = await fetch(`https://api.github.com/users/${login}`)
const user = await responseB.json()
if (!user.name) {
console.log("unspecified")
} else {
console.log(user.name)
}
} catch (err) {
console.error(err)
}
This is already complicated code to read and understand, and it’s tricky to do using Elm as well. Let’s see how to achieve the same, understanding exactly what we’re doing (we’ve all blindly copied and pasted code in the past, don’t deny).
First, let’s write the two requests we need; one for fetching the list of events, the second to obtain a given user’s details from her login:
import Http
import Json.Decode as Decode
eventsRequest : Http.Request (List String)
eventsRequest =
Http.get "https://api.github.com/events"
(Decode.list (Decode.at [ "actor", "login" ] Decode.string))
nameRequest : String -> Http.Request String
nameRequest login =
Http.get ("https://api.github.com/users/" ++ login)
(Decode.at [ "name" ]
(Decode.oneOf
[ Decode.string
, Decode.null "unspecified"
]
)
)
These two functions return an Http.Request with the type of data they’ll retrieve and decode from the JSON body of their respective responses. nameRequest handles the case where Github users haven’t entered their full name yet, so the name field might be null; as with the JavaScript version, we then default to "unspecified".
That’s good, but now we need to execute and chain these two requests, the second one depending on the result of the first one, from which we retrieve the actor.login value of the event object.
Elm is a pure language, meaning you can’t have side effects in your functions (a side effect is when functions alter things outside of their scope and use these things: an HTTP request is a huge side effect). So your functions must return something that represents a given side effect, instead of executing it within the function scope itself. The Elm runtime will be in charge of actually performing the side effect, using a Command.
In Elm, you’re usually going to use a Task to describe side effects. Tasks may succeed or fail (like Promises do in JavaScript), but they need to be turned into an Elm command to be actually executed.
To quote this excellent post on Tasks:
I find it helpful to think of tasks as if they were shopping lists. A shopping list contains detailed instructions of what should be fetched from the grocery store, but that doesn’t mean the shopping is done. I need to use the list while at the grocery store in order to get an end result.
But why do we need to convert a Task into a command, you may ask? Because a command can execute a single thing at a time, so if you need to execute multiple side effects at once, you’ll need a single task that represents all these side effects.
So basically:
- we create Http.Requests,
- we turn them into Tasks we can chain,
- we turn the resulting Task into a command.
The Http package provides Http.toTask to map an Http.Request into a Task. Let’s use that here:
fetchEvents : Task Http.Error (List String)
fetchEvents =
eventsRequest |> Http.toTask
fetchName : String -> Task Http.Error String
fetchName login =
nameRequest login |> Http.toTask
I created these two simple functions mostly to focus on their return types; a Task must define an error type and a result type. For example, fetchEvents being an HTTP task, it will receive an Http.Error when the task fails, and a list of strings when it succeeds.
But dealing with HTTP errors in a granular way is out of the scope of this blog post, so in order to keep things as simple and concise as possible, I’m gonna use Task.mapError to turn complex HTTP errors into their string representations:
toHttpTask : Http.Request a -> Task String a
toHttpTask request =
request
|> Http.toTask
|> Task.mapError toString
fetchEvents : Task String (List String)
fetchEvents =
toHttpTask eventsRequest
fetchName : String -> Task String String
fetchName login =
toHttpTask (nameRequest login)
Here, toHttpTask is a helper turning an Http.Request into a Task, transforming the complex Http.Error type into a serialized, purely textual version of it: a String.
We’ll also need a function to extract the very first element of a list, if any, as we did in JavaScript using events[0]. Such a function is built into the List core module as List.head. And let’s make this function a Task too, as that will ease chaining everything together and allow us to expose an error message when the list is empty:
pickFirst : List String -> Task String String
pickFirst logins =
case List.head logins of
Just login ->
Task.succeed login
Nothing ->
Task.fail "No events."
Note the use of Task.succeed and Task.fail, which are approximately the Elm equivalents of Promise.resolve and Promise.reject: this is how you create tasks that succeed or fail immediately.
So in order to chain all the pieces we have so far, we obviously need glue. And this glue is the Task.andThen function, which can chain our tasks this fancy way:
fetchEvents
|> Task.andThen pickFirst
|> Task.andThen fetchName
Neat. But wait. As we mentioned previously, Tasks are descriptions of side effects, not their actual execution. The Task.attempt function will help us do that, by turning a Task into a Command, provided we define a Msg that will be responsible for dealing with the received result:
type Msg
= Name (Result String String)
Result String String reflects the result of the HTTP request and shares the same type definition for both the error (a String) and the value (the user’s full name, a String too). Let’s use this Msg with Task.attempt:
fetchEvents
|> Task.andThen pickFirst
|> Task.andThen fetchName
|> Task.attempt Name
Here, the whole task chain is attempted, and its final result (success or failure) is wrapped in a Name message.
The cool thing here is that if anything fails along the chain, the chain stops and the error is propagated down to the Name handler. No need to check errors for each operation! Yes, that looks a lot like how JavaScript Promises’ .catch works.
Now, how are we going to execute the resulting command and process the result? We need to set up the Elm Architecture and its good old update function:
module Main exposing (main)
import Html exposing (..)
import Http
import Json.Decode as Decode
import Task exposing (Task)
type alias Model =
{ name : Maybe String
, error : String
}
type Msg
= Name (Result String String)
eventsRequest : Http.Request (List String)
eventsRequest =
Http.get "https://api.github.com/events"
(Decode.list (Decode.at [ "actor", "login" ] Decode.string))
nameRequest : String -> Http.Request String
nameRequest login =
Http.get ("https://api.github.com/users/" ++ login)
(Decode.at [ "name" ]
(Decode.oneOf
[ Decode.string
, Decode.null "unspecified"
]
)
)
toHttpTask : Http.Request a -> Task String a
toHttpTask request =
request
|> Http.toTask
|> Task.mapError toString
fetchEvents : Task String (List String)
fetchEvents =
toHttpTask eventsRequest
fetchName : String -> Task String String
fetchName login =
toHttpTask (nameRequest login)
pickFirst : List String -> Task String String
pickFirst events =
case List.head events of
Just event ->
Task.succeed event
Nothing ->
Task.fail "No events."
init : ( Model, Cmd Msg )
init =
{ name = Nothing, error = "" }
! [ fetchEvents
|> Task.andThen pickFirst
|> Task.andThen fetchName
|> Task.attempt Name
]
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
case msg of
Name (Ok name) ->
{ model | name = Just name } ! []
Name (Err error) ->
{ model | error = error } ! []
view : Model -> Html Msg
view model =
div []
[ if model.error /= "" then
div []
[ h4 [] [ text "Error encountered" ]
, pre [] [ text model.error ]
]
else
text ""
, p [] [ text <| Maybe.withDefault "Fetching..." model.name ]
]
main =
Html.program
{ init = init
, update = update
, subscriptions = always Sub.none
, view = view
}
That’s for sure more code than with the JavaScript example, but don’t forget that the Elm version renders HTML, not just logs in the console, and that the JavaScript code could be refactored to look a lot like the Elm version. Also the Elm version is fully typed and safeguarded against unforeseen problems, which makes a huge difference when your application grows.
As always, an Ellie is publicly available so you can play around with the code.
I recently had to introduce some Elm concepts to a coworker who had some experience with React and Redux. One of these concepts was List.foldl, a reduction function which exists in many languages, specifically as Array#reduce in JavaScript.
The coworker was struggling to understand the whole concept, so I tried to use a metaphor; I came up with the idea of a Ferris wheel next to a lake, with someone in one of its baskets holding a bucket, filling the basket with water from the lake every time the basket is back to the ground.
Yeah, I know.
So as he was staring at me like I was a crazy person, and as I knew he had used React and Redux in the past, I told him it was like the reducer functions he had probably already used.
We started writing a standard Redux reducer in plain js:
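Something along these lines (a reconstruction in the spirit of the Ferris wheel metaphor; the action names are illustrative, not the original snippet):

```javascript
// A standard Redux-style reducer: (state, action) => new state.
// Here state is the quantity of water in the basket.
function reducer(state, action) {
  switch (action.type) {
    case "FILL":
      // the basket is back to the ground: scoop water from the lake
      return state + action.amount;
    case "EMPTY":
      // sometimes we empty the basket
      return 0;
    default:
      return state;
  }
}

const init = 0; // the basket contains no water yet
```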
He was like “oh yeah, I know that”. Good! We could use that function iteratively:
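For instance like this (repeating the illustrative reducer so the snippet stands alone):

```javascript
// Same illustrative reducer as before, inlined for self-containment.
function reducer(state, action) {
  switch (action.type) {
    case "FILL":
      return state + action.amount;
    case "EMPTY":
      return 0;
    default:
      return state;
  }
}

// Apply each action in turn, carrying the state along by hand.
let state = 0;
const actions = [
  { type: "FILL", amount: 2 },
  { type: "FILL", amount: 3 },
  { type: "EMPTY" },
  { type: "FILL", amount: 1 },
];
for (const action of actions) {
  state = reducer(state, action);
}
console.log(state); // 1
```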
Or using Array#reduce:
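Which might look like this (again with the illustrative reducer inlined so the snippet stands alone):

```javascript
// Same illustrative reducer as before.
function reducer(state, action) {
  switch (action.type) {
    case "FILL":
      return state + action.amount;
    case "EMPTY":
      return 0;
    default:
      return state;
  }
}

const init = 0;
const actions = [
  { type: "FILL", amount: 2 },
  { type: "FILL", amount: 3 },
  { type: "EMPTY" },
  { type: "FILL", amount: 1 },
];

// Array#reduce threads the state through the reducer for each action.
const finalState = actions.reduce(reducer, init);
console.log(finalState); // 1
```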
So I could use the Ferris wheel metaphor again:
- state represents the state of the wheel basket (and the quantity of water in it)
- init is the initial state of the wheel basket (it contains no water yet)
- actions are the list of operations to proceed each time the basket reaches the ground again (here, filling the basket with water from the lake, sometimes emptying the basket)
For the record, yes, my coworker was still looking at me very oddly.
We moved on and decided to reimplement the same thing in Elm, using foldl. Its type signature is:
foldl : (a -> b -> b) -> b -> List a -> b
Wow, that looks complicated, especially when you’re new to Elm.
In Elm, type signatures separate each function argument and the return value with an arrow (->); so, let’s decompose the one for foldl:
- (a -> b -> b), the first argument, means we want a function taking two arguments typed a and b and returning a b. That sounds a lot like our reducer function in JavaScript! If so, a is an action, and b a state.
- b, the second argument, is the initial state we start reducing our list of actions from.
- List a, the third argument, is our list of actions.
- The return value is a b, hence a new state. We have the exact definition of what we’re after.
would have been much more obvious if we initially saw this, replacing a
by Action
and b
by State
:
Note: if you’re still struggling with these as and bs, you should probably read a little about Generic Types.
Our resulting minimalistic implementation was:
We quickly drafted this on Ellie. It’s not graphically impressive, but it works.
That was it: it became more obvious how to map things my coworker already knew onto something new to him, when in fact it was exactly the same thing, expressed slightly differently from a syntax perspective.
We also noted that the Elm Architecture and its traditional update function are basically a projection of foldl, with Action usually named Msg and State named Model.
The funny thing being, Redux design itself was initially inspired by the Elm Architecture!
In conclusion, here are quick takeaways when facing something difficult to understand:
I was a Linux user 10 years ago but moved to being a Mac one, mainly because I was tired of maintaining an often broken system (hello xorg.conf), and Apple had quite an appealing offer at the time: a well-maintained Unix platform matching beautiful hardware, sought-after UX, access to editor apps like Photoshop and MS Office, so best of both worlds.
To be frank, I was a happy Apple user in the early years, then the shine started to fade; messing up your system after upgrades became more frequent, Apple apps grew more and more bloated and intrusive (hello iTunes), UX started turning Kafkaian at times, too often I was finding myself tweaking and repairing stuff from the terminal…
The trigger was pulled when Apple announced their 2015 MacBook line, with strange connectivity decisions like having a unique port for everything and using dongles: meh. If even their top notch hardware started to turn weird, it was probably time to look elsewhere. And now I see their latest MBP line with the Esc key removed (so you can’t escape anymore, haha), I’m kinda comforted in my decision.
Meanwhile, since I’ve joined Mozilla and the Storage team, I could see many colleagues happily using Linux, and it didn’t feel like they were struggling with anything particular. Oddly enough, it seemed they were capable of working efficiently, both for professional and personal stuff.
I finally took the plunge and ordered a Lenovo X1 Carbon, then started my journey to being a Linux user again.
I didn’t debate this for days, I installed the latest available Ubuntu right away as it was the distribution I was using before moving to OSX (I even contributed to a book on it!). I was used to Debian-based systems and knew Ubuntu was still acclaimed for its ease of use and great hardware support. I wasn’t disappointed as on the X1 everything was recognized and operational right after the installation, including wifi, bluetooth and external display.
I was greeted with the Unity desktop, which was disturbing as I was a Gnome user back in the days. Up to a point I installed the latter, though in its version 3 flavor, which was also new to me.
I like Gnome3. It’s simple, configurable and made me feel productive fast. Though out of bad luck, or lack of skills and time to spend investigating, a few things were not working properly: fonts were huge in some apps and normal in others, the external display couldn’t be configured to a different resolution and dpi ratio than my laptop’s, things like that. After a few weeks, I switched back to Unity, and I’m still happily using it today as it has nicely solved all the issues I had with Gnome (which I still like a lot though).
Let’s be honest, the Apple keyboard French layout is utter crap, but as many things involving muscle memory, once you’re used to it, it’s a pain in the ass to readapt to anything else. I struggled for something like three weeks fighting old habits in this area, then eventually got through.
Last, a bunch of OSX apps are not available on Linux, so you have to find their equivalent, when they exist. The good news is, most often they do.
What also changed in last ten years is the explosion of the Web as an application platform. While LibreOffice and The Gimp are decent alternatives to MS Office and Photoshop, you now have access to many similarly scoped Web apps like Google Docs and Pixlr, provided you’re connected to the Internet. Just ensure using a modern Web browser like Firefox, which luckily ships by default in Ubuntu.
For example I use IRCCloud for IRC, as Mozilla has a corporate account there. The cool thing is it acts as a bouncer so it keeps track of messages when you go offline, and has a nice Android app which syncs.
There is obviously lots of things Web apps can’t do, like searching your local files or updating your system. And let’s admit that sometimes for specific tasks native apps are still more efficient and better integrated (by definition) than what the Web has to offer.
I was a hardcore Alfred.app user on OSX. On Linux there’s no strict equivalent, though Unity Dash, Albert or synapse can cover most of its coolness.
If you use the text shortcuts feature of Alfred (or if you use TextExpander), you might be interested in AutoKey as well.
I couldn’t spot any obvious usability difference between Nautilus and the OSX Finder, but I mostly use their basic features anyway.
To emulate Finder’s QuickLook, sushi does a proper job.
The switch shouldn’t be too hard as most popular editors are available on Linux: Sublime Text, Atom, VSCode and obviously vim and emacs.
I was using iTerm2 on OSX, so I was happy to find out about Terminator, which also supports tiling & split panes.
Unity provides a classic alt+tab switcher and an Exposé-style overview, just like OSX.
I’ve been a super hardcore Lightroom user and lover, but eventually found Darktable and am perfectly happy with it now. Its ergonomics take a little while to get used to though.
If you want to get an idea of what kind of results it can produce, take a look at my NYC gallery on 500px, fwiw all the pictures have been processed using DarkTable.
Disclaimer: if you find these pictures boring or ugly, it’s probably me and not DarkTable.
For things like cropping & scaling images, The Gimp does an okay job.
For organizing & managing a gallery, ShotWell seems to be what many people use nowadays, though I’m personally happy just using my file manager somehow.
Ah the good old days when you only had Gnome Solitaire to have a little fun on Linux. Nowadays even Steam is available for Linux, with more and more titles available. That should get you covered for a little while.
If it doesn’t, PlayOnLinux allows running Windows games on Wine. Most of the time, it works just fine.
I’ve been a Spotify user & customer for years, and am very happy with the Linux version of its client.
I’m using a Bose Mini SoundLink over bluetooth and never had any issues pairing and using it. To be 100% honest, PulseAudio crashed a few times but the system has most often been able to recover and enable sound again without any specific intervention from me.
By the way, it’s not always easy to switch between audio sources; Sound Switcher Indicator really helps by adding a dedicated menu in the top bar:
I’m definitely not an expert in the field but have sometimes needs for quickly crafting short movies for friends and family. kdenlive has just done its job perfectly so far for me.
While studying password managers for work lately, I’ve stumbled upon Enpass, it’s a good equivalent of 1Password which doesn’t have a Linux version of their app. Enpass has extensions for the most common browsers, and can sync to Dropbox or Owncloud among other cloud services.
I was using Dropbox and CrashPlan on OSX, guess what? I’m using them on Linux too.
ScreenCloud allows making screenshots, annotate them and export them to different targets like the filesystem or online image hosting providers like imgur or DropBox.
Diodon is a simple yet efficient clipboard manager, exposing a convenient menu in the system top bar.
If you know f.lux, RedShift is an alternative to it for Linux. The program will adapt the tint of your displays to the amount of light at this specific time of the day. Recommended.
Caffeine is a status bar application able to temporarily prevent the activation of both the screensaver and the sleep powersaving mode. Most useful when watching movies.
For me, the answer is yes.
I’ve been asked several questions by email, IRC, twitter and in the HN thread about this post, here are some answers in a random order.
Lenovo X1 Carbon 3rd Gen.
No.
Obviously worse than a MacBook (where controlled hardware & drivers are heavily optimized for that purpose), but not that bad tbh. I can work for max 5 hours straight, though if I start compiling stuff (hello gecko) it gets really bad.
No, I tried to use Fingerprint-GUI but it was so unstable that I removed it. I don’t mind typing passphrases anyway.
That sounds rather ambitious, and I didn’t feel like installing all these KDE/Qt packages for trying it out. From the captures I could find online, it looks like a great option though.
Yeah. Also I’ve learned that f.lux was inspired by Redshift and not the other way around. Point taken, thanks.
DarkTable is free. Also, its keystones-based perspective correction module is much better than anything I could find for LightRoom.
But yeah, overall LightRoom is way ahead, and if Adobe was kind enough to port it to Linux I’d buy and use it in a heartbeat.
Do you often fire DarkTable to edit a screenshot?
Good for you! Diversity is nice.
Haha, nice try.
I’m using Vivacious Dark in its graphite variant.
It’s the standard Unity one with the icon borders removed.
Let’s take things like map and reduce from the Array prototype:
function square(x) {
return x * x;
}
function sum(x, y) {
return x + y;
}
[1, 2, 3].map(square).reduce(sum)
// 14
I’ve heard things like this a few times:
Well yeah that’s cool, but I don’t do maths, I’m a Web developer.
And each time, it makes me a little sad.
As we’re programming language hipsters, in this article we’ll use the ES6 short function syntax, which landed a few weeks ago in Firefox Nightlies and makes writing code in the functional style a lot easier:
var square = x => x * x;
var sum = (x, y) => x + y;
[1, 2, 3].map(square).reduce(sum)
// 14
We’ll use other ES6 features as well because, you know, today is our future already.
This article’s contents will also probably hurt some people’s feelings, probably because there’s a lot to hate in here when you come from a pure OOP landscape. Please think of this article as a thought exercise instead of yet another new JavaScript tutorial™.
Take this DOM fragment featuring a good ol’ data table as an example:
<table>
<thead>
<tr>
<th>Country</th>
<th>Population (M)</th>
<th>GNP (B)</th>
</tr>
</thead>
<tbody>
<tr><td>Belgium</td><td>11.162</td><td>419</td></tr>
<tr><td>France</td><td>63.820</td><td>2246</td></tr>
<tr><td>Germany</td><td>80.640</td><td>3139</td></tr>
<tr><td>Greece</td><td>10.758</td><td>298</td></tr>
<tr><td>Italy</td><td>59.789</td><td>1871</td></tr>
<tr><td>Netherlands</td><td>16.795</td><td>713</td></tr>
<tr><td>Poland</td><td>38.548</td><td>782</td></tr>
<tr><td>Portugal</td><td>10.609</td><td>252</td></tr>
<tr><td>United Kingdom</td><td>64.231</td><td>2290</td></tr>
<tr><td>Spain</td><td>46.958</td><td>1432</td></tr>
</tbody>
</table>
To map the country names to a regular array of strings:
var rows = document.querySelectorAll("tbody tr");
[].map.call(rows, row => row.querySelector("td").textContent);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
It worked: our map operation transformed a list of DOM table row elements into the text value of their very first cell. Still, it feels like we could enhance the code ergonomics a bit here.
Note: If you wonder why we use [].map.call instead of simply calling map on the element list, that’s because NodeList doesn’t implement the Array interface… Yeah, I know.
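To see what borrowing Array.prototype.map buys us, here’s a quick sketch using a plain array-like object standing in for a NodeList (the object here is made up for illustration):

```javascript
// An array-like object: numeric keys and a length, but no Array methods.
var arrayLike = {0: "a", 1: "b", 2: "c", length: 3};

// arrayLike.map is undefined, yet Array.prototype.map only needs
// indexed access and a length, so we can borrow it with call():
var upper = [].map.call(arrayLike, function(x) { return x.toUpperCase(); });
// upper is a genuine Array: ["A", "B", "C"]
```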
As an illustrative exercise, let’s write our own map function that works with any passed iterable and always returns a genuine Array; also, let’s invert the order of the passed args to ease further composability (more on this later):
const map = (fn, iterable) => [].map.call(iterable, fn);
Note: we declare map as a constant to avoid any accidental mess. Also, I don’t see any obvious reason for a function to be mutated here.
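For the record, this is the kind of accident const protects us from (a small sketch):

```javascript
"use strict";
const map = (fn, iterable) => [].map.call(iterable, fn);

var caught = null;
try {
  map = null; // accidental reassignment of our function
} catch (e) {
  caught = e;
}
// caught is a TypeError: assigning to a const binding throws at runtime
```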
So we can write:
var rows = document.querySelectorAll("tbody tr");
map(row => row.querySelector("td").textContent, rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
As a side note, this map implementation also works for strings:
map(x => x.toUpperCase(), "foo");
// ["F", "O", "O"]
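…and, more generally, for any array-like object, such as the arguments one (a quick sketch; doubleAll is a made-up name):

```javascript
const map = (fn, iterable) => [].map.call(iterable, fn);

function doubleAll() {
  // arguments is array-like, not a real Array, yet our map handles it:
  return map(x => x * 2, arguments);
}

var doubled = doubleAll(1, 2, 3); // [2, 4, 6]
```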
We can also write a tiny abstraction on top of querySelectorAll, again to ensure further composability:
const nodes = (sel, root) => (root || document).querySelectorAll(sel);
So now we can write:
var rows = nodes("tbody tr");
map(node => nodes("td", node)[0].textContent, rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
Hmm, the operations performed within the function passed to map (finding a first child node, getting an element property value) sound like things we’re likely to do many times while extracting information from the DOM. And then we’d probably want better code semantics as well.
For starters, let’s create a first() function for getting the first element of a collection:
const first = iterable => iterable[0];
// first([1, 2, 3]) => 1
Our example becomes:
map(node => first(nodes("td", node)).textContent, rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
In the same vein, we can use a prop() higher order function — basically a function returning a function — one more time to create a reusable & composable property getter (we’ll get back to this, read on):
const prop = name => object => object[name];
// const getFoo = prop("foo");
// getFoo({foo: "bar"}) => "bar"
If you struggle to understand how this works, here is how we would write prop using the classic function declaration syntax:
function prop(name) {
return function(object) {
return object[name];
};
}
Let’s use our new property getter generator:
const getText = prop("textContent");
map(node => getText(first(nodes("td", node))), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
Now, how about a generic function for finding a node’s child elements from a selector? Let’s do this:
const finder = selector => root => nodes(selector, root);
const findCells = finder("td");
findCells(document.querySelector("table")).length
// 30
Don’t panic, again this is how we’d write it using standard function declaration syntax:
function finder(selector) {
return function(root) {
return nodes(selector, root);
}
}
Let’s use it:
const getText = prop("textContent");
const findCells = finder("td");
map(node => getText(first(findCells(node))), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
At this point, you may be wondering how this possibly improves code readability and maintainability… Now is the perfect time to use function composition (you’ve been waiting for it) to aggregate & chain minimal bits of reusable code.
Note: If you’re familiar with the UNIX philosophy, that’s exactly the same approach as when using the pipe operator:
$ ls -la | awk '{print $2}' | grep pattern | wc -l
Let’s create a sequence
function to help composing functions sequentially:
const sequence = function() {
return [].reduce.call(arguments, function(comp, fn) {
return function() {
return comp(fn.apply(null, arguments));
};
});
};
This one is a bit complicated; it basically takes all the functions passed as arguments and returns a new function capable of processing them sequentially, passing each one the result of the previous execution:
const squarePlus2 = sequence(x => 2 + x, x => x * x);
squarePlus2(4);
// 4 * 4 + 2 => 18 => Aspirin is in the bathroom
In classic notation without using a sequence, that would be the equivalent of:
function plus2(x) {
return 2 + x;
}
function square(x) {
return x * x;
}
function squarePlus2(x) {
return plus2(square(x));
}
squarePlus2(4);
// 18
By the way, sequence is a very good place to use ES6 Rest Arguments, which have also landed recently in Gecko; let’s rewrite it accordingly:
const sequence = function(...fns) {
return fns.reduce(function(comp, fn) {
return (...args) => comp(fn.apply(null, args));
});
};
Let’s use it in our little DOM crawling example:
const getText = prop("textContent");
const findCells = finder("td");
map(sequence(getText, first, findCells), rows)
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
What I like the most about the FP style is that it actually describes fairly well what’s going to happen; you can almost read the code as you’d read plain English (caveat: don’t do this at family dinners).
Also, you may want to pass the functions in the opposite order, à la UNIX pipes, which usually enhances legibility a bit for seasoned functional programmers; let’s create a compose function for doing just that:
const compose = (...fns) => sequence.apply(null, fns.reverse());
map(compose(findCells, first, getText), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
As a side note, one may argue that:
map(sequence(getText, first, findCells), rows);
is not really much better than:
map(row => getText(first(findCells(row))), rows);
Though the composed approach is probably more likely to scale when adding many more functions to the stack:
a(b(c(d(e(f(g(h(foo))))))));
sequence(a, b, c, d, e, f, g, h)(foo);
Last, a composed function is itself composable by essence, and that’s probably a killer feature:
map(sequence(getText, sequence(first, findCells)), rows);
// ["Belgium", "France", "Germany", "Greece", "Italy", …]
Which is something that an API like this:
var crawler = new Crawler("table");
crawler.findCells("tbody tr").first().getText();
is hardly likely to offer.
To compute the total population of listed countries:
const reduce = (fn, init, iterable) => [].reduce.call(iterable, fn, init);
const second = (iterable) => iterable[1];
const sum = (x, y) => x + y;
var populations = map(compose(findCells, second, getText, parseFloat),
rows);
reduce(sum, 0, populations);
// 403.31000000000006
To generate a JSON export of the whole table data:
const partial = (fn, ...r) => (...a) => fn.apply(null, r.concat(a));
const nth = n => (iterable) => iterable[n - 1];
const third = nth(3);
const getTexts = partial(map, getText);
const asObject = (data) => ({
name: first(data),
population: parseFloat(second(data)),
gnp: parseFloat(third(data))
});
var countries = map(compose(findCells, getTexts, asObject), rows);
JSON.stringify(countries);
// "[{"name":"Belgium","population":11.162,"gnp":419}, …
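Since partial makes its first appearance here, a short isolated example of how it pre-fills the leading arguments of a function (add and add5 are made-up names for illustration):

```javascript
const partial = (fn, ...r) => (...a) => fn.apply(null, r.concat(a));

const add = (x, y, z) => x + y + z;
const add5 = partial(add, 2, 3); // pre-fills x = 2 and y = 3
var result = add5(10);           // 2 + 3 + 10 => 15
```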
To compute the global average GNP per capita for these countries:
const perCapita = c => ({name: c.name, perCapita: c.gnp / c.population});
var gnpPerCapita = map(perCapita, countries);
JSON.stringify(gnpPerCapita);
// "[{"name":"Belgium","perCapita":37.5380756136893}, …
To filter countries having more than n€ of GNP per capita, sort them in descending order and export the result as JSON:
const select = (fn, iterable) => [].filter.call(iterable, fn);
const sort = (fn, iterable) => [].sort.call(iterable, fn);
const sortDesc = partial(sort, (a, b) => a.perCapita > b.perCapita ? -1 : 1);
const healthy = partial(select, c => c.perCapita > 38);
const healthyCountries = compose(healthy, sortDesc);
JSON.stringify(healthyCountries(gnpPerCapita));
// "[{"name":"Netherlands","perCapita":42.45311104495385}, …
I could probably go on and on, but you get the picture. This post doesn’t claim that the FP approach is the best of all in JavaScript, only that it certainly has its advantages. Feel free to play with these concepts for a while and make up your own mind, eventually :)
If you’re interested in Functional JavaScript, I suggest the following resources:
If you’re interested in ECMAScript 6, here are some good links to read about:
The idea behind code coverage is to record which parts of your code (functions, statements, conditionals and so on) have been executed by your test suite, to compute metrics out of these data and usually to provide tools for navigating and inspecting them.
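To give an intuition of what such a tool records, here’s a hand-rolled toy sketch of the idea (real instrumenters rewrite your source automatically, of course):

```javascript
// Toy coverage: instrument a function by hand with branch hit counters.
var hits = {errorBranch: 0, happyBranch: 0};

function greets(target) {
  if (!target) {
    hits.errorBranch++;
    throw new Error("missing target");
  }
  hits.happyBranch++;
  return "hello " + target;
}

greets("world");
// hits.errorBranch is still 0: the error path was never exercised,
// which is exactly the kind of blind spot a coverage report reveals.
```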
Not a lot of frontend developers I know actually test their frontend code, and I can barely imagine how many of them have ever set up code coverage… Mostly because there aren’t many frontend-oriented tools in this area, I guess.
So far I’ve only found one which provides an adapter for Mocha and actually works…
Drinking game for web devs:
(1) Think of a noun
(2) Google "<noun>.js"
(3) If a library with that name exists - drink
— Shay Friedman (@ironshay) August 22, 2013
Blanket.js is an easy to install, easy to configure, and easy to use JavaScript code coverage library that works both in the browser and with nodejs.
Its use is dead easy: adding Blanket support to your Mocha test suite is just a matter of adding this single line to your HTML test file:
<script src="vendor/blanket.js"
data-cover-adapter="vendor/mocha-blanket.js"></script>
Source files: blanket.js, mocha-blanket.js
As an example, let’s reuse the silly Cow example we used in a previous episode:
// cow.js
(function(exports) {
"use strict";
function Cow(name) {
this.name = name || "Anon cow";
}
exports.Cow = Cow;
Cow.prototype = {
greets: function(target) {
if (!target)
throw new Error("missing target");
return this.name + " greets " + target;
}
};
})(this);
And its test suite, powered by Mocha and Chai:
var expect = chai.expect;
describe("Cow", function() {
describe("constructor", function() {
it("should have a default name", function() {
var cow = new Cow();
expect(cow.name).to.equal("Anon cow");
});
it("should set cow's name if provided", function() {
var cow = new Cow("Kate");
expect(cow.name).to.equal("Kate");
});
});
describe("#greets", function() {
it("should greet passed target", function() {
var greetings = (new Cow("Kate")).greets("Baby");
expect(greetings).to.equal("Kate greets Baby");
});
});
});
Let’s create the HTML test file for it, featuring Blanket and its adapter for Mocha:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Test</title>
<link rel="stylesheet" media="all" href="vendor/mocha.css">
</head>
<body>
<div id="mocha"></div>
<div id="messages"></div>
<div id="fixtures"></div>
<script src="vendor/mocha.js"></script>
<script src="vendor/chai.js"></script>
<script src="vendor/blanket.js"
data-cover-adapter="vendor/mocha-blanket.js"></script>
<script>mocha.setup('bdd');</script>
<script src="cow.js" data-cover></script>
<script src="cow_test.js"></script>
<script>mocha.run();</script>
</body>
</html>
Note the data-cover attribute we added to the script tag loading the source of our library.
Running the tests now gives us something like this:
As you can see, the report at the bottom highlights that we haven’t actually tested the case where an error is raised in case a target name is missing. We’ve been informed of that, nothing more, nothing less. We simply know we’re missing a test here. Isn’t this cool? I think so!
Just remember that code coverage will only bring you numbers and raw information, not actual proof that the whole of your code logic has been covered. If you ask me, the best feedback you can get about your code logic and implementation comes from pair programming sessions and code reviews — but that’s another story.
So, is code coverage a silver bullet? No. Is it useful? Definitely. Happy testing!
]]>For the past 4 months, I’ve been working for Mozilla on a big project where such a testing strategy was involved. While I wish we could have used CasperJS for this, Firefox wasn’t supported at the time and we needed to ensure proper compatibility with its JavaScript engine. So we went with Mocha, Chai and Sinon, and they have proven to be a great workflow for us so far.
Mocha is a test framework while Chai is an expectation one: Mocha sets up and describes test suites, while Chai provides convenient helpers to perform all kinds of assertions against your JavaScript code.
So let’s say we have a Cow object we want to unit test:
// cow.js
(function(exports) {
"use strict";
function Cow(name) {
this.name = name || "Anon cow";
}
exports.Cow = Cow;
Cow.prototype = {
greets: function(target) {
if (!target)
throw new Error("missing target");
return this.name + " greets " + target;
}
};
})(this);
Nothing fancy, but we want to unit test this one.
Both Mocha and Chai can be used in a Node environment as well as within the browser; in the latter case, you’ll have to set up a test HTML page and use special builds of these libraries:
My advice is to store these files in a vendor subfolder. Let’s create an HTML file to test our lib:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Cow tests</title>
<link rel="stylesheet" media="all" href="vendor/mocha.css">
</head>
<body>
<div id="mocha"><p><a href=".">Index</a></p></div>
<div id="messages"></div>
<div id="fixtures"></div>
<script src="vendor/mocha.js"></script>
<script src="vendor/chai.js"></script>
<script src="cow.js"></script>
<script>mocha.setup('bdd')</script>
<script src="cow_test.js"></script>
<script>mocha.run();</script>
</body>
</html>
Please note we’ll be using Chai’s BDD Expect API, hence the mocha.setup('bdd') call here.
Now let’s write a simple test suite for our Cow object constructor in cow_test.js:
var expect = chai.expect;
describe("Cow", function() {
describe("constructor", function() {
it("should have a default name", function() {
var cow = new Cow();
expect(cow.name).to.equal("Anon cow");
});
it("should set cow's name if provided", function() {
var cow = new Cow("Kate");
expect(cow.name).to.equal("Kate");
});
});
describe("#greets", function() {
it("should throw if no target is passed in", function() {
expect(function() {
(new Cow()).greets();
}).to.throw(Error);
});
it("should greet passed target", function() {
var greetings = (new Cow("Kate")).greets("Baby");
expect(greetings).to.equal("Kate greets Baby");
});
});
});
Tests should be passing, so if you open the HTML document in your browser, you should get something like:
If any of these expectations fails, you’ll be notified in the test results, eg. if we change the implementation of greets as below:
Cow.prototype = {
greets: function(target) {
if (!target)
throw new Error("missing target");
return this.name + " greets " + target + "!";
}
};
You’ll get this instead:
Now imagine we implement a Cow#lateGreets method so the greetings come with a delay of one second:
Cow.prototype = {
greets: function(target) {
if (!target)
throw new Error("missing target");
return this.name + " greets " + target + "!";
},
lateGreets: function(target, cb) {
setTimeout(function(self) {
try {
cb(null, self.greets(target));
} catch (err) {
cb(err);
}
}, 1000, this);
}
};
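Note the third argument passed to setTimeout above: any arguments provided after the delay are forwarded to the callback, which is how the function gets hold of self without closing over this. A minimal sketch of that behavior (the values are illustrative):

```javascript
// setTimeout forwards any arguments given after the delay to the callback:
var received = [];

setTimeout(function(name, punctuation) {
  received.push(name + punctuation); // called with ("world", "!")
}, 10, "world", "!");
```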
We need to test this one as well, and Mocha helps us with its optional done callback for tests:
describe("#lateGreets", function() {
it("should pass an error if no target is passed", function(done) {
(new Cow()).lateGreets(null, function(err, greetings) {
expect(err).to.be.an.instanceof(Error);
done();
});
});
it("should greet passed target after one second", function(done) {
(new Cow("Kate")).lateGreets("Baby", function(err, greetings) {
expect(greetings).to.equal("Kate greets Baby");
done();
});
});
});
Conveniently, Mocha will highlight any suspiciously long operation with red pills in case it wasn’t really expected:
When you do unit testing, you don’t want to depend on anything external to the unit of code under test. And while avoiding side effects in your functions is usually good practice, in Web development it’s not always an easy task (think DOM, Ajax, native browser APIs, etc.)
Sinon is a great JavaScript library for stubbing and mocking such external dependencies, and for keeping control over the side effects exercised against them.
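The core idea behind a stub fits in a few lines; here’s a naive hand-rolled sketch to illustrate the concept (this is not Sinon’s actual implementation):

```javascript
// A minimal hand-rolled stub: a replacement function recording its calls.
function makeStub() {
  function stub() {
    stub.callCount++;
    stub.lastArgs = [].slice.call(arguments);
  }
  stub.callCount = 0;
  stub.lastArgs = null;
  return stub;
}

var stubbedLog = makeStub();
var originalLog = console.log;
console.log = stubbedLog;  // replace the side-effecting dependency
console.log("moo");        // the call is recorded, nothing gets printed
console.log = originalLog; // restore it, much like sandbox.restore() does

// stubbedLog.callCount => 1, stubbedLog.lastArgs => ["moo"]
```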
As an example, let’s imagine that our Cow#greets method doesn’t return a string but rather logs it directly to the browser console:
// cow.js
(function(exports) {
"use strict";
function Cow(name) {
this.name = name || "Anon cow";
}
exports.Cow = Cow;
Cow.prototype = {
greets: function(target) {
if (!target)
return console.error("missing target");
console.log(this.name + " greets " + target);
}
};
})(this);
How do we unit test this? Well, Sinon to the rescue! First, let’s add the Sinon script to our HTML test file:
<!-- ... -->
<script src="vendor/mocha.js"></script>
<script src="vendor/chai.js"></script>
<script src="vendor/sinon-1.7.1.js"></script>
We’ll stub the console object’s log and error methods so we can check that they’re called and what’s passed to them:
var expect = chai.expect;
describe("Cow", function() {
var sandbox;
beforeEach(function() {
// create a sandbox
sandbox = sinon.sandbox.create();
// stub some console methods
sandbox.stub(window.console, "log");
sandbox.stub(window.console, "error");
});
afterEach(function() {
// restore the environment as it was before
sandbox.restore();
});
// ...
describe("#greets", function() {
it("should log an error if no target is passed in", function() {
(new Cow()).greets();
sinon.assert.notCalled(console.log);
sinon.assert.calledOnce(console.error);
sinon.assert.calledWithExactly(console.error, "missing target")
});
it("should log greetings", function() {
var greetings = (new Cow("Kate")).greets("Baby");
sinon.assert.notCalled(console.error);
sinon.assert.calledOnce(console.log);
sinon.assert.calledWithExactly(console.log, "Kate greets Baby")
});
});
});
Several things to be noticed here:
beforeEach and afterEach are part of the Mocha API and allow defining setup and tear down operations for each test;
assertions are performed through sinon.assert calls; a sinon-chai plugin exists for Chai, you may want to have a look at it.
There are many other cool aspects of Mocha, Chai and Sinon I couldn’t cover in this blog post, but I hope it opened your appetite for investigating more about them. Happy testing!
]]>I’m happy to announce the immediate availability of CasperJS 1.1-beta1, featuring support for SlimerJS, which basically ports the PhantomJS API onto the Gecko platform.
Yes, that means as of 1.1-beta1 you can run most of your existing CasperJS scripts against a headless Firefox (using a virtual framebuffer for now), thanks to the huge amount of effort provided by Laurent Jouanneau, a long-time XUL/Gecko contributor.
This is great news for all Web developers wanting to avoid contributing to the establishment of a monoculture.
1.1-beta1 brings a whole lot of other features; you may want to read the full CHANGELOG.
]]>I’ve recently open-sourced hubot-mood, a hubot script to store a team’s mood and get some metrics about it. We’re using it at Scopyleft.
Moods are stored in redis through the node-redis library, which uses asynchronous calls to perform operations on the redis backend.
So typically, to store an entry, you do something like the following:
function store(mood, cb) {
redis.rpush("moods", mood, function(err) {
cb(err, mood);
});
}
store("2013-02-01:n1k0:sunny", function(err, mood) {
if (err) throw err;
console.log("stored mood entry: " + mood);
});
Classic. But what if you want to perform multiple insertions, eg. to load a bunch of fixtures for your tests? I’m using mocha here:
describe("moods test", function() {
// fixtures
var moods = [
"2013-02-01:n1k0:sunny"
, "2013-02-02:n1k0:cloudy"
, "2013-02-03:n1k0:stormy"
, "2013-02-04:n1k0:rainy"
// … we could add many more
];
it("should do something useful with moods", function(done) {
store(moods[0], function(err, mood) {
assert.ifError(err);
store(moods[1], function(err, mood) {
assert.ifError(err);
store(moods[2], function(err, mood) {
assert.ifError(err);
store(moods[3], function(err, mood) {
assert.ifError(err);
// now let's test stuff with stored moods
done();
});
});
});
});
});
});
Here we go again, callback hell and unmanageable pyramids.
Async.js is a node library that helps dealing with asynchronicity and flattening pyramids. An npm install async later, we’re ready to go:
describe("moods tests", function() {
var moods = [
"2013-02-01:n1k0:sunny"
, "2013-02-02:n1k0:cloudy"
, "2013-02-03:n1k0:stormy"
, "2013-02-04:n1k0:rainy"
// … we could add many more
];
it("should do something useful with moods", function(done) {
async.parallel([
function(cb) {
store(moods[0], function(err, mood) {
cb(err, mood);
});
},
function(cb) {
store(moods[1], function(err, mood) {
cb(err, mood);
});
},
function(cb) {
store(moods[2], function(err, mood) {
cb(err, mood);
});
},
function(cb) {
store(moods[3], function(err, mood) {
cb(err, mood);
});
},
], function(err, moods) {
assert.ifError(err);
// now let's test stuff with stored moods
done();
});
});
});
Indeed, this is definitely not DRY code. But one has to be creative to turn a tool into an efficient solution; let’s invoke the powers of Array#map to build the required callback functions out of our moods array:
function load(fixtures, onComplete) {
async.parallel(fixtures.map(function(fixture) {
return function(cb) {
store(fixture, function(err, result) {
cb(err, result);
});
};
}), onComplete);
}
describe("moods tests", function() {
var moods = [
"2013-02-01:n1k0:sunny"
, "2013-02-02:n1k0:cloudy"
, "2013-02-03:n1k0:stormy"
, "2013-02-04:n1k0:rainy"
// … we could add many more
];
it("should do something useful with moods", function(done) {
load(moods, function(err, storedMoods) {
assert.ifError(err);
// now let's test stuff with stored moods
done();
});
});
});
Edit: there’s even a built-in async.map() function, not sure how I missed it; so the code is even shorter:
describe("moods tests", function() {
var moods = [
"2013-02-01:n1k0:sunny"
, "2013-02-02:n1k0:cloudy"
, "2013-02-03:n1k0:stormy"
, "2013-02-04:n1k0:rainy"
// … we could add many more
];
it("should do something useful with moods", function(done) {
async.map(moods, store, function(err, storedMoods) {
assert.ifError(err);
// now let's test stuff with stored moods
done();
});
});
});
Async.js is a great package and one of the most popular in the node ecosystem, but there are many others.
Such a library combined with a functional approach provides a killer combo to solve your daily problems when programming JavaScript.
]]>