Allow CORS with localhost in Chrome

Today I spent some time wrestling with the notorious same-origin policy in order to get CORS (cross-origin resource sharing) working in Chrome for development work I was doing between two applications running on localhost. Setting the Access-Control-Allow-Origin header to * seemed to have no effect, and this bug report nearly led me to believe that a bug in Chrome made CORS with localhost impossible. It's not. It turned out that I also needed some other CORS-related headers: Access-Control-Allow-Headers and Access-Control-Allow-Methods.

This (slightly generalized) snippet of Express.js middleware is what ended up working for me:

app.all("/api/*", function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Cache-Control, Pragma, Origin, Authorization, Content-Type, X-Requested-With");
res.header("Access-Control-Allow-Methods", "GET, PUT, POST");
return next();
});

With that, Chrome started making OPTIONS requests when I wanted to POST from localhost:3001 to localhost:2002. It seems that using contentType: application/json for POSTs forces CORS preflighting (any Content-Type besides the form defaults makes a request "non-simple," and non-simple requests get preflighted), which surprised me since it seems like a common case for APIs. But no matter:

app.all("/api/*", function(req, res, next) {
if (req.method.toLowerCase() !== "options") {
return next();
}
return res.send(204);
});

Emacs cl-lib madness

Emacs 24.3 renamed the Common Lisp emulation package from cl to cl-lib. The release notes say that cl in 24.3 is now "a bunch of aliases that provide the old, non-prefixed names", but I ran into problems with certain packages looking for--as best I can determine--function names that changed at some point and were not kept around as aliases. This was particularly problematic when trying to run 24.3 on OS X 10.6.8.

In case anyone else runs into this problem, here's my solution:

;; Require Common Lisp. (cl in <=24.2, cl-lib in >=24.3.)
(if (require 'cl-lib nil t)
    (progn
      (defalias 'cl-block-wrapper 'identity)
      (defalias 'member* 'cl-member)
      (defalias 'adjoin 'cl-adjoin))
  ;; Else we're on an older version, so require cl.
  (require 'cl))

We try to require cl-lib, and when that succeeds, define some aliases so that packages don't complain about missing cl-block-wrapper, member*, and adjoin. If it doesn't succeed, we're on an older Emacs, so require the old cl.

Juxtaposition

A few days ago, I happened by chance to read two articles one after the other.

The first is about how good Unix is at scaling the scheduling and distribution of work among processes. The second is about how Unix is the problem when it comes to the scheduling and distribution of work at scale.

The question, of course, is "What scale?"--just as the difference between a cure and a poison is sometimes the dosage.

Review of Requests 1.0

Author's note: This piece was originally published in the excellent literary journal DIAGRAM, Issue 12.6. I'm re-publishing it here for formatting reasons.

Identification with another is addictive: some of my life's most profound, memorable experiences have come when something bridged the gap between me and another human. Because I'm a reader, this can occur across the distance of space and time. It's happened with minor Chekhov characters, and at the end of Katherine Mansfield stories. It happens again and again with Norman Rush and George Saunders. The author has pushed a character through the page and connected with me on a deep level: identification.

Identification happens with computer programming, too.

I say this as a reader, writer, and programmer: I experience identification when reading and programming, and I strive to create it when writing and programming.

Though they deal with the messiness of reality differently, several techniques common to both disciplines enable them to achieve this mental intimacy: navigating complexity; avoiding pitfalls that inhibit communication; choosing structure wisely; harnessing expressive power; and inhabiting other minds. The Requests library, a work of computer programming by Kenneth Reitz, illustrates this.

Read More

A Case Study of Node.js in Production

I'm giving a talk about my experience developing and deploying a Node.js web service in production at the next Nova-Node meetup, October 30 at 6:30 p.m. Below is the writeup. If it sounds interesting to you, come by!

SpanishDict recently deployed a new text-to-speech service powered by Node. This service can generate audio files on the fly for arbitrary Spanish and English texts with rapid response times. The presentation will walk through the design, development, testing, monitoring, and deployment process for the new application. We will cover topics like how to structure an Express app, testing and debugging, learning to think in streams and pipes, writing a Chef cookbook to deploy to AWS, and monitoring the application for high performance. The lead engineer on the project, William Bert, will also talk about his experiences transitioning from a Python background to Node and some of the key insights he had about writing in Node while developing the application.

Update: here are the slides from the talk.

(Relatively) quick and easy Gensim example code

Here's some sample code that shows the basic steps necessary to use gensim to create a corpus, train models (log entropy and latent semantic analysis), and perform semantic similarity comparisons and queries.

gensim has an excellent tutorial, and this does not replace reading and understanding it. Nonetheless, this may be helpful for those interested in doing some quick experimentation and getting their hands dirty fast. It takes you from training corpus to index and queries in about 100 lines of code, much of which is documentation.

Note that this code will not work out of the box. To train the models, you need to provide your own background corpus (a collection of documents, where a document can range from one sentence to multiple pages of text). Choosing a good corpus is an art; generally, you want tens of thousands of documents that are representative of your problem domain. Like the gensim tutorial, this code shows how to build a corpus from Wikipedia for experimentation, though note that doing so requires a lot of computing time. You could save hours by installing accelerated BLAS on your system.
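
To give a feel for the flow before you dive into the full example, here's a minimal sketch of the pipeline, with a toy three-document corpus standing in for a real one:

# Minimal sketch: corpus -> log entropy -> LSI -> similarity query.
# The three "documents" here are placeholders; a real background corpus
# needs thousands of documents or more.
from gensim import corpora, models, similarities

documents = ["human machine interface for lab computer applications",
             "a survey of user opinion of computer system response time",
             "relation of user perceived response time to error measurement"]
texts = [doc.lower().split() for doc in documents]

dictionary = corpora.Dictionary(texts)                 # word <-> id mappings
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

log_ent = models.LogEntropyModel(corpus)               # log entropy weighting
lsi = models.LsiModel(log_ent[corpus], id2word=dictionary, num_topics=2)

index = similarities.MatrixSimilarity(lsi[log_ent[corpus]])

query_bow = dictionary.doc2bow("user response time".split())
sims = index[lsi[log_ent[query_bow]]]                  # cosine similarities
print(sorted(enumerate(sims), key=lambda pair: -pair[1]))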

Read More

An Introduction to gensim: "Topic Modelling for Humans"

On Tuesday, I presented at the monthly DC Python meetup. My talk was an introduction to gensim, a free Python framework for topic modelling and semantic similarity using LSA/LSI and other statistical techniques. I've been using gensim on and off for several months at work, and I really appreciate its performance, clean API design, documentation, and community. (All of this is due to its creator, Radim Rehurek, who I interviewed recently.)

The presentation slides are available here. I also wrote some quick gensim example code that walks through creating a corpus, generating and transforming models, and using models to do semantic similarity. The code and slides are both also available on my github account.

Finally, I also developed a demo app to visualize semantic similarity queries. It's a Flask web app, with gensim generating data on the backend that is clustered by scipy and scikit-learn and visualized by d3.js as agglomerative and hierarchical clusters, as well as a simple table and dendrogram. To make it all work in real time, I used threading and hookbox. I call it Visularity, and it's available on github. You need to provide your own model and dictionary data to use it--check out my presentation and visit radimrehurek.com/gensim/ to learn how. Comments and feedback welcome!
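
For the curious, the clustering step works roughly like this--not Visularity's actual code, just an illustrative sketch with a made-up similarity matrix:

# Illustrative sketch: turning pairwise similarity scores (like those a
# gensim similarity index produces) into a hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Made-up pairwise cosine similarities for four documents.
sims = np.array([[1.0, 0.9, 0.2, 0.1],
                 [0.9, 1.0, 0.3, 0.2],
                 [0.2, 0.3, 1.0, 0.8],
                 [0.1, 0.2, 0.8, 1.0]])

dist = 1.0 - sims             # convert similarity to distance
np.fill_diagonal(dist, 0.0)   # linkage expects zero self-distance
tree = linkage(squareform(dist), method="average")
print(tree)                   # each row records one cluster merge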

Interview with Radim Rehurek, creator of gensim

Tomorrow at the May 2012 DC Python meetup, I'm giving a talk on gensim, a Python framework for topic modeling that I use at work and on my own for semantic similarity comparisons. (I'll post the slides and example code for the talk soon.) I've found gensim to be a useful and well-designed tool, and pretty much all credit for it goes to its creator, Radim Rehurek. Radim was kind enough to answer a few questions I sent him about gensim's history and goals, and about his background and interests.

WB: Why did you create gensim?

RR: Consulting gig for a digital library project (Czech Digital Mathematics Library, dml.cz), some 3 years ago. It started off as a few loosely connected Python scripts to support the "show similar articles" functionality. We wanted to use some of the statistical methods, like latent semantic analysis. Originally, gensim only contained wrappers around existing Fortran libraries for SVD, like Propack and Svdpack.

But there were issues with that, and it scaled badly (all documents in RAM), so I started looking for more scalable, online algorithms. Running these popular methods shouldn't be so hard, I thought!

In the end, I developed new algorithms for these methods for gensim. The theoretical part of this research later turned into a part of my PhD thesis.

Read More