Docker for Mac: the Missing Manual
Under the hood, Docker for Mac is running an Alpine Linux virtual machine. This guide helps with issues related to communication between OS X/macOS and this VM, and running up against limits on the size of the disk allocated to the VM.
Speeding things up
Disable sync on flush
This speeds up write operations involving containers. The tradeoff is increased risk of data loss: pending writes will be lost if your computer, Docker, or a container crashes. Since Docker for Mac is used for development, not production, this may be a good tradeoff to make. Here's how:
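A sketch of the approach described in the references below. Docker for Mac keeps its settings in a git-backed database; the exact path and the full-sync-on-flush key are assumptions that have changed across releases, so verify against your installation before committing anything:

    # Docker for Mac stores low-level settings in a git repo; changes take effect on commit.
    cd ~/Library/Containers/com.docker.docker/Data/database/
    git reset --hard
    echo false > com.docker.driver.amd64-linux/disk/full-sync-on-flush
    git add com.docker.driver.amd64-linux/disk/full-sync-on-flush
    git commit -m "disable sync on flush"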
References:
- https://gist.github.com/mkrakauer-rio/e7d9de75f5ac680e790365748ca188a4
- https://dzone.com/articles/docker-for-mac-performance-tweaks-1
overlay2 storage engine
If you installed Docker for Mac a while ago, it's probably using the aufs storage engine. overlay2 is a newer, more performant storage engine. From https://docs.docker.com/engine/userguide/storagedriver/selectadriver/#docker-ce:
When possible, overlay2 is the recommended storage driver. When installing Docker for the first time, overlay2 is used by default. Previously, aufs was used by default when available, but this is no longer the case.
On existing installations using aufs, it will continue to be used.
Elsewhere, this page says:
Docker for Mac and Docker for Windows are intended for development, rather than production. Modifying the storage driver on these platforms is not possible.
But this is not true: you can use overlay2 with Docker for Mac.
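One way to switch (a sketch, not an official procedure): in Docker for Mac's Preferences, the Daemon > Advanced tab exposes the daemon configuration JSON. Add the storage-driver key and restart Docker. Note that images and containers created under aufs become invisible to the daemon after the switch, so export anything you still need first.

    {
      "storage-driver": "overlay2"
    }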
How I made Magit fast again
TLDR: here are the settings that made the biggest difference for me. YMMV.
    (setq magit-commit-show-diff nil)  ; don't populate a diff buffer for every commit
Magit is awesome. And it's getting better with regular releases, a more consistent interface, and much more. But since the release of 2.1, it's generally been slower for me. I'm not the only one. In particular, the status buffer would take multiple seconds to refresh after almost any action such as committing, checking out a branch, stashing/popping, deleting a file, etc.
Tarsius, Magit's maintainer, is clearly aware of the performance issues and working to fix them. It can't be easy to diagnose performance problems given the multitude of ways Magit can be configured, plus the huge variety of characteristics among all the git repos out there. Nonetheless, I'm sure performance will improve in future versions.
But I needed to do something about it in the near term. I searched online and, while Magit has a page dedicated to perf settings, none of them helped me much. So I grepped the Magit source for defcustom and read all the docstrings in search of things to try. Here's what I found.
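For reference, these are the kinds of options that turn up in such a search. The list below is illustrative rather than my exact configuration, and which ones help will depend heavily on the repository:

    ;; Illustrative selection of Magit defcustoms related to refresh/diff work.
    (setq magit-commit-show-diff nil            ; skip the diff buffer when committing
          magit-refresh-status-buffer nil       ; only refresh status when it's the current buffer
          magit-diff-highlight-indentation nil  ; skip extra diff fontification passes
          magit-diff-highlight-trailing nil
          magit-diff-paint-whitespace nil
          magit-revision-insert-related-refs nil)

    ;; `magit-refresh-verbose' logs how long each status section takes to refresh,
    ;; which helps pinpoint the slow ones.
    (setq magit-refresh-verbose t)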
How HTML5 sandboxes could be so much more useful
I love the idea of HTML5 iframe sandboxes but I'm unable to apply them the way they are currently implemented.
Why? Because the iframes I want to target are created programmatically. I'm not writing the iframe tags myself, or the code that writes them, so I can't specify the sandbox attribute.
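For context, the attribute only helps when you author the iframe tag yourself and can decide which capabilities to grant, along these lines (the URL is a placeholder):

    <!-- Hand-authored iframe: scripts allowed; forms, popups, top navigation,
         and same-origin access stay locked down. -->
    <iframe src="https://ads.example.com/slot.html" sandbox="allow-scripts"></iframe>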
www.spanishdict.com is an ad-supported site, like many other sites out there. We use Google Publisher Tags (GPT) which creates a cross-origin iframe for each ad slot on the page -- automatically.
We have relationships with several different ad companies. On every pageview, we load a script from each company so they can evaluate the impression and make their bid (a technique known as "header bidding").
Then the winner of the bid gets to load their creative, which they typically do by creating another cross-origin iframe within the iframe that GPT created and has given them access to. Sometimes you'll see several layers of iframe nesting.
These iframes are all created programmatically. Because I don't create the iframes (or know anything about them ahead of time, such as what domains they're going to come from), I can't add the sandbox attribute, which makes it useless for me.
In my ideal world
Ideally, I could set sandboxing to apply to the cross-origin iframes I know are going to be created on my page.
Why I'd like Node and io.js to merge
I made nodegovernance.io — which encourages Node users to express support for using io.js's open governance model as the basis for the Node foundation's technical committee — because from where I sit, one Node with a technical committee composed of the best technical people is the ideal outcome of this situation.
My life would be easier if there were just one Node. My team wouldn't have to spend time discussing and deciding which to use. There wouldn't be any confusion in the future about whether npm install would give me a module I could use with whatever runtime I happened to be using.
The recent Medium post said that io.js met with Joyent CEO Scott Hammond last week, so I'm sure he knows exactly where they stand.
And the big companies that are already on the foundation board are certainly going to get input into the rules for the technical committee.
What about the rest of us — engineers who use Node / io.js every day but aren't inner core, who don't work for an IBM or a PayPal, who want stability and also language improvements that will make our jobs easier?
I hated the idea of looking back a few months from now, in the midst of heated discussions about whether to use Node or io.js, with npm installs failing all around me, and wondering if there was anything I could have done to prevent this situation. A tweet doesn't count for all that much, but it's more than nothing.
@williamjohnbert Thanks William. The TC will consider that when they define the governance model.
— Scott Hammond (@Scott_Hammond) February 13, 2015
Using Github Pages to hand off a legacy site and make everyone happier
Here's how I turned over maintenance of a legacy site -- built as a one-off project years ago using now outdated technology -- to my non-technical cofounder, with only a few hours of work. Best of all, it now uses evergreen technology that will make it easy for her to update for years to come, and everyone is happy with the outcome.
Using a Node repl in Emacs with nvm and npm
Running a repl inside Emacs is often convenient for evaluating code, checking syntax, and myriad other tasks. When I wanted to run a Node REPL, I found that I needed to do a little set up to get everything working the way I wanted.
My first question was: which Node? With nvm, I've installed multiple versions on my machine. So I needed a way to specify one to execute.
Another question was: where to run Node? Since Node's require() looks inside node_modules directories starting with the current directory and working up the file system hierarchy, the current working directory is important. If I want access to the npm modules installed for project A, I need to start my repl's Node process from path/to/projectA.

But that raises another question: what happens when I want to switch to project B? Do I need to use process.chdir() to switch the Node repl's current working directory to path/to/projectB? That's clumsy and annoying.
Here's how I answered these questions:
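Here's a minimal sketch of the approach, assuming nvm.el is installed (it provides nvm-use); my-node-repl is a hypothetical helper written for illustration, not part of any package:

    ;; A sketch assuming nvm.el; `my-node-repl' is a hypothetical helper.
    (require 'nvm)
    (require 'comint)

    (defun my-node-repl (version project-dir)
      "Run a Node VERSION repl with PROJECT-DIR as its working directory."
      (interactive "sNode version: \nDProject directory: ")
      ;; nvm-use puts the chosen Node version on PATH / `exec-path'.
      (nvm-use version)
      ;; Child processes inherit `default-directory' as their cwd, so binding it
      ;; here answers the "which directory?" question without process.chdir().
      (let ((default-directory (file-name-as-directory project-dir)))
        (pop-to-buffer
         (make-comint (concat "node-" version) "node" nil "--interactive"))))

Switching to project B is then just a matter of invoking the command again and pointing it at that project's directory, which starts a separate repl buffer with its own working directory.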
Towards 100% Uptime with Node
In December, I gave a talk at Nova Node called "Towards 100% Uptime with Node.js". I wrote an accompanying blog post for the Fluencia / SpanishDict engineering blog: The 4 Keys to 100% Uptime with Node.js.
Hopefully these resources will be useful for other Node engineers out there—they have helped us have confidence that downtime is not a problem for our users.
How legit HTTP (with an async io assist) massacred my Node workers
An uncaught exception in our Node app was causing not only one, but two and then three workers to die. (Fortunately, we hardly ever encounter uncaught exceptions. Really, just this one since launch a few months ago. We're Node studs! Right?)
The funny thing is that we're using Express, which (via Connect) wraps each request / response in a try / catch. And we use Express's error handler, which returns 500 on unhandled errors.
Another funny thing is we use cluster, which isolates workers from each other. They live in separate, solipsistic processes.
But instead of returning 500, our worker simply died. And, as if in sympathy, the rest immediately followed.
Time to get to the bottom of this. A Node stud like me can figure it out. No sweat. Right?
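For reference, a minimal sketch of the kind of setup described above; the route and error handler are illustrative, not our actual code:

    var cluster = require("cluster");
    var express = require("express");
    var os = require("os");

    if (cluster.isMaster) {
      // One worker per CPU; each worker is an isolated process.
      os.cpus().forEach(function () {
        cluster.fork();
      });
    } else {
      var app = express();

      app.get("/boom", function (req, res) {
        // A synchronous throw here is caught by Express/Connect's try/catch...
        throw new Error("boom");
      });

      // ...and turned into a 500 response by the error-handling middleware.
      app.use(function (err, req, res, next) {
        res.status(500).send("Internal Server Error");
      });

      app.listen(3000);
    }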
Allow CORS with localhost in Chrome
Today I spent some time wrestling with the notorious same origin policy in order to get CORS (cross-origin resource sharing) working in Chrome for development work I was doing between two applications running on localhost. Setting the Access-Control-Allow-Origin header to * seemed to have no effect, and this bug report nearly led me to believe that was due to a bug in Chrome that made CORS with localhost impossible. It's not. It turned out that I also needed some other CORS-related headers: Access-Control-Allow-Headers and Access-Control-Allow-Methods.
This (slightly generalized) snippet of Express.js middleware is what ended up working for me:
    app.all("/api/*", function(req, res, next) {
      res.header("Access-Control-Allow-Origin", "*");
      // The specific values below are illustrative; list the headers and
      // methods your API actually uses.
      res.header("Access-Control-Allow-Headers", "Content-Type");
      res.header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
      next();
    });
With that, Chrome started making OPTIONS requests when I wanted to POST from localhost:3001 to localhost:2002. It seems that using contentType: application/json for POSTs forces CORS preflighting, which surprised me since it seems like a common case for APIs, but no matter:
    app.all("/api/*", function(req, res, next) {
      res.header("Access-Control-Allow-Origin", "*");
      res.header("Access-Control-Allow-Headers", "Content-Type");
      res.header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
      // Answer the preflight OPTIONS request directly so the browser
      // proceeds with the actual POST.
      if (req.method === "OPTIONS") {
        return res.send(200);
      }
      next();
    });