How I made Magit fast again

TLDR: here are the settings that made the biggest difference for me. YMMV.

(setq magit-commit-show-diff nil
      magit-revert-buffers 1)

Magit is awesome. And it's getting better with regular releases, a more consistent interface, and much more. But since the release of 2.1, it's generally been slower for me. I'm not the only one. In particular, the status buffer would take multiple seconds to refresh after almost any action such as committing, checking out a branch, stashing/popping, deleting a file, etc.

Tarsius, Magit's maintainer, is clearly aware of the performance issues and working to fix them. It can't be easy to diagnose performance problems given the multitude of ways Magit can be configured, plus the huge variety of characteristics among all the git repos out there. Nonetheless, I'm sure performance will improve in future versions.

But I needed to do something about it in the near term. I searched online, and while Magit has a page dedicated to performance settings, none of them helped me much. So I grepped the Magit source for defcustom and read all the docstrings in search of things to try. Here's what I found.

Read More

How HTML5 sandboxes could be so much more useful

I love the idea of HTML5 iframe sandboxes but I'm unable to apply them the way they are currently implemented.

Why? Because the iframes I want to target are created programmatically. I'm not writing the iframe tags myself, or the code that writes them, so I can't specify the sandbox attribute.

www.spanishdict.com is an ad-supported site, like many other sites out there. We use Google Publisher Tags (GPT), which automatically creates a cross-origin iframe for each ad slot on the page.

Unsandboxed iframes in unsandboxed iframes

We have relationships with several different ad companies. On every pageview, we load a script from each company so they can evaluate the impression and make their bid (a technique known as "header bidding").

Then the winner of the bid gets to load their creative, which they typically do by creating another cross-origin iframe within the iframe that GPT created and has given them access to. Sometimes you'll see several layers of iframe nesting.

These iframes are all created programmatically. Because I don't create the iframes (or know anything about them ahead of time such as what domains they're going to come from), I can't add the sandbox attribute, which makes it useless for me.

In my ideal world

Ideally, I could set sandboxing to apply to the cross-origin iframes I know are going to be created on my page.

Read More

Why I'd like Node and io.js to merge

I made nodegovernance.io — which encourages Node users to express support for using io.js's open governance model as the basis for the Node foundation's technical committee — because from where I sit, one Node with a technical committee composed of the best technical people is the ideal outcome of this situation.

My life would be easier if there were just one Node. My team wouldn't have to spend time discussing and deciding which to use. There wouldn't be any confusion in the future about whether npm install would give me a module I could use with whatever runtime I happened to be using.

The recent Medium post said that io.js met with Joyent CEO Scott Hammond last week, so I'm sure he knows exactly where they stand.

And the big companies that are already on the foundation board are certainly going to get input into the rules for the technical committee.

What about the rest of us — engineers who use Node / io.js every day but aren't inner core, who don't work for an IBM or a PayPal, who want stability and also language improvements that will make our jobs easier?

I hated the idea of looking back a few months from now, in the midst of heated discussions about whether to use Node or io.js, with npm installs failing all around me, and wondering if there was anything I could have done to prevent this situation. A tweet doesn't count for all that much, but it's more than nothing.

Using a Node repl in Emacs with nvm and npm

Running a repl inside Emacs is often convenient for evaluating code, checking syntax, and myriad other tasks. When I wanted to run a Node repl, I found that I needed to do a little setup to get everything working the way I wanted.

My first question was: which Node? With nvm, I've installed multiple versions on my machine, so I needed a way to specify which one to execute.
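Whichever install ends up launched, it's easy to confirm from inside the running process. A quick sanity check (the exact version and path will of course depend on your nvm setup):

```javascript
// Sanity check: which Node is this repl actually running?
console.log(process.version);  // the running Node's version string, e.g. "v0.12.7"
console.log(process.execPath); // absolute path to the binary, e.g. somewhere under ~/.nvm
```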

Another question was: where to run Node? Since npm looks inside node_modules directories starting with the current directory and working up the file system hierarchy, the current working directory is important. If I want access to the npm modules installed for project A, I need to start my repl's Node process from path/to/projectA.

But that raises another question: what happens when I want to switch to project B? Do I need to use process.chdir() to switch the Node repl's current working directory to path/to/projectB? That's clumsy and annoying.

Here's how I answered these questions:

Read More

How legit HTTP (with an async io assist) massacred my Node workers

An uncaught exception in our Node app was causing not just one but two and then three workers to die. (Fortunately, we hardly ever encounter uncaught exceptions. Really, just this one since launch a few months ago. We're Node studs! Right?)

The funny thing is that we're using Express, which (via Connect) wraps each request / response in a try / catch. And we use Express's error handler, which returns 500 on unhandled errors.

Another funny thing is we use cluster, which isolates workers from each other. They live in separate, solipsistic processes.

But instead of returning 500, our worker simply died. And, as if in sympathy, the rest immediately followed.
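Here's a minimal sketch (not our actual code) of the underlying mechanic: a try / catch around code that schedules async work can't catch an exception thrown later from the callback, because by the time the callback runs, the try block has long since exited.

```javascript
var caughtSync = false;

// Without this handler, the process dies -- exactly what happened to our worker.
process.on("uncaughtException", function(err) {
  console.log("uncaught:", err.message);
  process.exit(0);
});

try {
  setTimeout(function() {
    throw new Error("async boom"); // thrown on a later tick of the event loop
  }, 0);
} catch (err) {
  caughtSync = true; // never reached: the try block already returned
}

console.log("caught synchronously?", caughtSync); // prints: caught synchronously? false
```

Connect's per-request try / catch has the same blind spot for exceptions thrown from async callbacks.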

Time to get to the bottom of this. A Node stud like me can figure it out. No sweat. Right?

Read More

Allow CORS with localhost in Chrome

Today I spent some time wrestling with the notorious same-origin policy in order to get CORS (cross-origin resource sharing) working in Chrome for development work I was doing between two applications running on localhost. Setting the Access-Control-Allow-Origin header to * seemed to have no effect, and this bug report nearly led me to believe that a bug in Chrome made CORS with localhost impossible. It's not. It turned out that I also needed some other CORS-related headers: Access-Control-Allow-Headers and Access-Control-Allow-Methods.

This (slightly generalized) snippet of Express.js middleware is what ended up working for me:

app.all("/api/*", function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Cache-Control, Pragma, Origin, Authorization, Content-Type, X-Requested-With");
  res.header("Access-Control-Allow-Methods", "GET, PUT, POST");
  return next();
});

With that, Chrome started making OPTIONS requests when I wanted to POST from localhost:3001 to localhost:2002. It seems that using contentType: application/json for POSTs forces CORS preflighting, which surprised me since it seems like a common case for APIs, but no matter:

app.all("/api/*", function(req, res, next) {
  if (req.method.toLowerCase() !== "options") {
    return next();
  }
  return res.send(204);
});

Emacs cl-lib madness

Emacs 24.3 renamed the Common Lisp emulation package from cl to cl-lib. The release notes say that cl in 24.3 is now "a bunch of aliases that provide the old, non-prefixed names", but I encountered some problems with certain packages searching for--as best I can determine--function names that at some point changed but were not kept around as aliases. This was particularly problematic when trying to run 24.3 on OS X 10.6.8.

In case anyone else runs into this problem, here's my solution:

;; Require Common Lisp. (cl in <=24.2, cl-lib in >=24.3.)
(if (require 'cl-lib nil t)
    (progn
      (defalias 'cl-block-wrapper 'identity)
      (defalias 'member* 'cl-member)
      (defalias 'adjoin 'cl-adjoin))
  ;; Else we're on an older version, so require cl.
  (require 'cl))

We try to require cl-lib, and when that succeeds, define some aliases so that packages don't complain about missing cl-block-wrapper, member*, and adjoin. If it doesn't succeed, we're on an older Emacs, so require the old cl.

Juxtaposition

A few days ago, I happened by chance to read these two articles one after the other.

The first is about how good Unix is at scaling the scheduling and distribution of work among processes. The second is about how Unix is the problem when it comes to the scheduling and distribution of work at scale.

The question, of course, is "At what scale?" Sometimes the difference between a cure and a poison is just the dosage.