
Mad Marx: The Class Warrior

3 public comments:

Karl Marx of the Wasteland headshotting Ayn Rand is the single most beautiful thought I have ever been gifted with.

So true! All of the world's problems could be solved by Marx(ists) killing more of their opponents.

Your irony game is so strong.

I'd watch it

“MP3 is dead” missed the real, much better story


If you read the news, you may think the MP3 file format was recently officially “killed” somehow, and any remaining MP3 holdouts should all move to AAC now. These are all simple rewrites of Fraunhofer IIS’ announcement that they’re terminating the MP3 patent-licensing program.

Very few people got it right. The others missed what happened last month:

If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017 when U.S. Patent 6,009,399, held by and administered by Technicolor, expired.

MP3 is no less alive now than it was last month or will be next year — the last known MP3 patents have simply expired.1

So while there’s a debate to be had — in a moment — about whether MP3 should still be used today, Fraunhofer’s announcement has nothing to do with that, and is simply the ending of its patent-licensing program (because the patents have all expired) and a suggestion that we move to a newer, still-patented format.

Why still use MP3 when newer, better formats exist?

MP3 is very old, but it’s the same age as JPEG, which has also long since been surpassed in quality by newer formats. JPEG is still ubiquitous not because Engadget forgot to declare its death, but because it’s good enough and supported everywhere, making it the most pragmatic choice most of the time.2

AAC and other newer audio codecs can produce better quality than MP3, but the difference is only significant at low bitrates. At about 128 kbps or greater, the differences between MP3 and other codecs are very unlikely to be noticed, so it isn’t meaningfully better for personal music collections. For new music, get AAC if you want, but it’s not worth spending any time replacing MP3s you already have.

AAC makes a lot of sense for low- and medium-quality applications where bandwidth is extremely limited or expensive, like phone calls and music-streaming services, or as sound for video, for which it’s the most widely supported format.

It may seem to make sense for podcasts, but it doesn’t. Podcasters need to distribute a single file type that’s playable on the most players and devices possible, and though AAC is widely supported today, it’s still not as widely supported as MP3. So podcasters overwhelmingly choose MP3: among the 50 million podcast episodes in Overcast’s database, 92% are MP3, and within the most popular 500 podcasts, 99% are MP3.

And AAC is also still patent-encumbered, which prevents innovation, hinders support, restricts potential uses, and imposes burdensome taxes on anything that goes near it.

So while AAC does offer some benefits, it also brings additional downsides and costs, and the benefits aren’t necessary or noticeable in some major common uses. Even the file-size argument for lower bitrates is less important than ever in a world of ever-increasing bandwidth and ever-higher relative uses of it.3

Ogg Vorbis and Opus offer similar quality advantages as AAC with (probably) no patent issues, which was necessary to provide audio options to free, open-source software and other contexts that aren’t compatible with patent licensing. But they’re not widely supported, limiting their useful applications.

Until a few weeks ago, there had never been an audio format that was small enough to be practical, widely supported, and had no patent restrictions, forcing difficult choices and needless friction upon the computing world. Now, at least for audio, that friction has officially ended. There’s finally a great choice without asterisks.

MP3 is supported by everything, everywhere, and is now patent-free. There has never been another audio format as widely supported as MP3, it’s good enough for almost anything, and now, over twenty years since it took the world by storm, it’s finally free.

  1. There’s some debate about whether two remaining patents have expired yet. I’m not a patent lawyer, but the absolute latest interpretation would have the last one expire soon, on December 30, 2017. ↩︎

  2. For photos and other image types poorly suited to PNG, of course. ↩︎

  3. Suppose a podcast debates switching from 64 kbps MP3 to 48 kbps AAC. That would only save about 7 MB per hour of content, which isn’t a meaningful amount of data for most people anymore (especially for podcasts, which are typically background-downloaded on Wi-Fi). Read the Engadget and Gizmodo articles, at 3.6 and 5.2 MB, respectively, and you’ve already spent more than that difference. Watch a 5-minute YouTube video at default quality, and you’ll blow through about three times as much. ↩︎
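     (To check that arithmetic: dropping from 64 kbps to 48 kbps saves 16 kbit/s, and 16 kbit/s × 3,600 s = 57,600 kbit, or about 7.2 MB per hour.)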

1 public comment:

MP3 is patent-free. Not dead. Still suitable for rates over 128

So you want to expose Go on the Internet


Back when crypto/tls was slow and net/http young, the general wisdom was to always put Go servers behind a reverse proxy like NGINX. That’s not necessary anymore!

At Cloudflare we recently experimented with exposing pure Go services to the hostile wide area network. With the Go 1.8 release, net/http and crypto/tls proved to be stable, performant and flexible.

However, the defaults are tuned for local services. In this article we’ll see how to tune and harden a Go server for Internet exposure.


TLS

You’re not running an insecure HTTP server on the Internet in 2016. So you need crypto/tls. The good news is that it’s now really fast (as you’ve seen in a previous article on this blog), and its security track record so far is excellent.

The default settings resemble the Intermediate recommended configuration of the Mozilla guidelines. However, you should still set PreferServerCipherSuites to ensure safer and faster cipher suites are preferred, and CurvePreferences to avoid unoptimized curves: a client using CurveP384 would cause up to a second of CPU to be consumed on our machines.

	// Causes servers to use Go's default ciphersuite preferences,
	// which are tuned to avoid attacks. Does nothing on clients.
	PreferServerCipherSuites: true,
	// Only use curves which have assembly implementations
	CurvePreferences: []tls.CurveID{
		tls.X25519, // Go 1.8 only
		tls.CurveP256,
	},

If you can take the compatibility loss of the Modern configuration, you should then also set MinVersion and CipherSuites.

	MinVersion: tls.VersionTLS12,
	CipherSuites: []uint16{
		tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, // Go 1.8 only
		tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,   // Go 1.8 only
		tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
		tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,

		// Best disabled, as they don't provide Forward Secrecy,
		// but might be necessary for some clients
		// tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		// tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
	},

Be aware that the Go implementation of the CBC cipher suites (the ones we disabled in Modern mode above) is vulnerable to the Lucky13 attack, even if partial countermeasures were merged in 1.8.

A final caveat: all these recommendations apply only to the amd64 architecture, for which fast, constant time implementations of the crypto primitives (AES-GCM, ChaCha20-Poly1305, P256) are available. Other architectures are probably not fit for production use.

Since this server will be exposed to the Internet, it will need a publicly trusted certificate. You can get one easily and for free thanks to Let’s Encrypt and the golang.org/x/crypto/acme/autocert package’s GetCertificate function.
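For illustration, here is a minimal sketch of that setup with golang.org/x/crypto/acme/autocert; the domain name and cache directory are placeholders, not values from this article:

m := &autocert.Manager{
	Prompt:     autocert.AcceptTOS,
	HostPolicy: autocert.HostWhitelist("example.com"), // placeholder domain
	Cache:      autocert.DirCache("/var/cache/autocert"),
}
srv := &http.Server{
	Addr:      ":443",
	TLSConfig: &tls.Config{GetCertificate: m.GetCertificate},
}
// Empty cert/key paths: certificates come from GetCertificate.
log.Fatal(srv.ListenAndServeTLS("", ""))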

Don’t forget to redirect HTTP page loads to HTTPS, and consider HSTS if your clients are browsers.

srv := &http.Server{
	ReadTimeout:  5 * time.Second,
	WriteTimeout: 5 * time.Second,
	Handler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Header().Set("Connection", "close")
		url := "https://" + req.Host + req.URL.String()
		http.Redirect(w, req, url, http.StatusMovedPermanently)
	}),
}
go func() { log.Fatal(srv.ListenAndServe()) }()

You can use the SSL Labs test to check that everything is configured correctly.


net/http is a mature HTTP/1.1 and HTTP/2 stack. You probably know how (and have opinions about how) to use the Handler side of it, so that’s not what we’ll talk about. We will instead talk about the Server side and what goes on behind the scenes.


Timeouts

Timeouts are possibly the most dangerous edge case to overlook. Your service might get away with it on a controlled network, but it will not survive on the open Internet, especially (but not only) if maliciously attacked.

Applying timeouts is a matter of resource control. Even if goroutines are cheap, file descriptors are always limited. A connection that is stuck, not making progress or is maliciously stalling should not be allowed to consume them.

A server that has run out of file descriptors will fail to accept new connections with errors like

http: Accept error: accept tcp [::]:80: accept: too many open files; retrying in 1s

A zero/default http.Server, like the one used by the package-level helpers http.ListenAndServe and http.ListenAndServeTLS, comes with no timeouts. You don’t want that.

[Diagram: HTTP server phases]

There are three main timeouts exposed in http.Server: ReadTimeout, WriteTimeout and IdleTimeout. You set them by explicitly using a Server:

srv := &http.Server{
    ReadTimeout:  5 * time.Second,
    WriteTimeout: 10 * time.Second,
    IdleTimeout:  120 * time.Second,
    TLSConfig:    tlsConfig,
    Handler:      serveMux,
}
log.Println(srv.ListenAndServeTLS("", ""))

ReadTimeout covers the time from when the connection is accepted to when the request body is fully read (if you do read the body, otherwise to the end of the headers). It’s implemented in net/http by calling SetReadDeadline immediately after Accept.

The problem with a ReadTimeout is that it doesn’t allow a server to give the client more time to stream the body of a request based on the path or the content. Go 1.8 introduces ReadHeaderTimeout, which only covers up to the request headers. However, there’s still no clear way to do reads with timeouts from a Handler. Different designs are being discussed in issue #16100.

WriteTimeout normally covers the time from the end of the request header read to the end of the response write (a.k.a. the lifetime of the ServeHTTP), by calling SetWriteDeadline at the end of readRequest.

However, when the connection is over HTTPS, SetWriteDeadline is called immediately after Accept so that it also covers the packets written as part of the TLS handshake. Annoyingly, this means that (in that case only) WriteTimeout ends up including the header read and the first byte wait.

Similarly to ReadTimeout, WriteTimeout is absolute, with no way to manipulate it from a Handler (#16100).

Finally, Go 1.8 introduces IdleTimeout, which limits, server-side, the amount of time a Keep-Alive connection will be kept idle before being reused. Before Go 1.8, the ReadTimeout would start ticking again immediately after a request completed, making it very hostile to Keep-Alive connections: the idle time would consume time the client should have been allowed to send the request, causing unexpected timeouts even for fast clients.

You should set Read, Write and Idle timeouts when dealing with untrusted clients and/or networks, so that a client can’t hold up a connection by being slow to write or read.

For detailed background on HTTP/1.1 timeouts (up to Go 1.7) read my post on the Cloudflare blog.


HTTP/2

HTTP/2 is enabled automatically on any Go 1.6+ server if:

  • the request is served over TLS/HTTPS
  • Server.TLSNextProto is nil (setting it to an empty map is how you disable HTTP/2)
  • Server.TLSConfig is set and ListenAndServeTLS is used or
  • Serve is used and tls.Config.NextProtos includes "h2" (like []string{"h2", "http/1.1"}, since Serve is called too late to auto-modify the TLS Config)
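As a sketch of that last case (srv, tlsConfig, and the address are assumed from the earlier examples):

// NextProtos must be set before the listener is wrapped.
tlsConfig.NextProtos = []string{"h2", "http/1.1"}
ln, err := net.Listen("tcp", ":443")
if err != nil {
	log.Fatal(err)
}
log.Fatal(srv.Serve(tls.NewListener(ln, tlsConfig)))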

Timeouts have a slightly different meaning in HTTP/2, since the same connection can serve different requests at the same time; however, they map to the same set of Server timeouts in Go.

Sadly, ReadTimeout breaks HTTP/2 connections in Go 1.7. Instead of being reset for each request it’s set once at the beginning of the connection and never reset, breaking all HTTP/2 connections after the ReadTimeout duration. It’s fixed in 1.8.

Between this and the inclusion of idle time in ReadTimeout, my recommendation is to upgrade to 1.8 as soon as possible.

TCP Keep-Alives

If you use ListenAndServe (as opposed to passing a net.Listener to Serve, which offers zero protection by default) a TCP Keep-Alive period of three minutes will be set automatically. That will help with clients that disappear completely off the face of the earth leaving a connection open forever, but I’ve learned not to trust that, and to set timeouts anyway.

To begin with, three minutes might be too high, which you can solve by implementing your own tcpKeepAliveListener.
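Here is a minimal sketch of such a listener, mirroring the unexported one inside net/http; the 30-second period is an arbitrary example, not a recommendation:

type tcpKeepAliveListener struct {
	*net.TCPListener
}

func (ln tcpKeepAliveListener) Accept() (net.Conn, error) {
	tc, err := ln.AcceptTCP()
	if err != nil {
		return nil, err
	}
	tc.SetKeepAlive(true)
	tc.SetKeepAlivePeriod(30 * time.Second) // instead of the 3-minute default
	return tc, nil
}

// Wrap the TCP listener before handing it to Serve (for TLS,
// combine with tls.NewListener as in the HTTP/2 example).
ln, err := net.Listen("tcp", ":80")
if err != nil {
	log.Fatal(err)
}
log.Fatal(srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)}))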

More importantly, a Keep-Alive only makes sure that the client is still responding, but does not place an upper limit on how long the connection can be held. A single malicious client can just open as many connections as your server has file descriptors, hold them half-way through the headers, respond to the rare keep-alives, and effectively take down your service.

Finally, in my experience connections tend to leak anyway until timeouts are in place.


ServeMux

Package-level functions like http.Handle[Func] (and maybe your web framework) register handlers on the global http.DefaultServeMux, which is used if Server.Handler is nil. You should avoid that.

Any package you import, directly or through other dependencies, has access to http.DefaultServeMux and might register routes you don’t expect.

For example, if any package somewhere in the tree imports net/http/pprof, clients will be able to get CPU profiles for your application. You can still use net/http/pprof by registering its handlers manually.

Instead, instantiate an http.ServeMux yourself, register handlers on it, and set it as Server.Handler. Or set whatever your web framework exposes as Server.Handler.
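A sketch of both points together, with your own mux as Server.Handler and the net/http/pprof handlers mounted by hand (index stands in for your own handlers; in production you would gate the pprof routes behind authentication or a separate internal listener):

mux := http.NewServeMux()
mux.HandleFunc("/", index) // your own routes

// Opt in to pprof deliberately instead of inheriting it from DefaultServeMux.
mux.HandleFunc("/debug/pprof/", pprof.Index)
mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

srv := &http.Server{
	Handler: mux,
	// ... timeouts and TLSConfig as above ...
}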


Logging

net/http does a number of things before yielding control to your handlers: it accepts the connections, runs the TLS handshake, and so on.

If any of these go wrong a line is written directly to Server.ErrorLog. Some of these, like timeouts and connection resets, are expected on the open Internet. It’s not clean, but you can intercept most of those and turn them into metrics by matching them with regexes from the Logger Writer, thanks to this guarantee:

Each logging operation makes a single call to the Writer’s Write method.
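For example, here is a sketch of a Writer that turns TLS handshake failures into a counter; the metric name and regex are assumptions, and the exact message text can vary between Go versions:

var handshakeError = regexp.MustCompile(`TLS handshake error`)

type errorLogCounter struct {
	handshakes *expvar.Int
}

func (c errorLogCounter) Write(p []byte) (int, error) {
	// Safe to match per call: each logging operation is a single Write.
	if handshakeError.Match(p) {
		c.handshakes.Add(1)
	}
	return len(p), nil
}

srv.ErrorLog = log.New(errorLogCounter{expvar.NewInt("tls_handshake_errors")}, "", 0)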

To abort from inside a Handler without logging a stack trace you can either panic(nil) or in Go 1.8 panic(http.ErrAbortHandler).


Metrics

A metric you’ll want to monitor is the number of open file descriptors. Prometheus does that by using the proc filesystem.

If you need to investigate a leak, you can use the Server.ConnState hook to get more detailed metrics of what stage the connections are in. However, note that there is no way to keep a correct count of StateActive connections without keeping state, so you’ll need to maintain a map[net.Conn]ConnState.
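A sketch of such a hook (all names assumed), pruning the map as connections terminate:

var (
	connMu sync.Mutex
	conns  = make(map[net.Conn]http.ConnState)
)

srv.ConnState = func(c net.Conn, state http.ConnState) {
	connMu.Lock()
	defer connMu.Unlock()
	switch state {
	case http.StateClosed, http.StateHijacked:
		delete(conns, c) // the connection is done; stop tracking it
	default:
		conns[c] = state // StateNew, StateActive, or StateIdle
	}
}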


The days of needing NGINX in front of all Go services are gone, but you still need to take a few precautions on the open Internet, and probably want to upgrade to the shiny, new Go 1.8.

Happy serving!


Collecting and Reading with DEVONthink


I read a lot. Until recently that included Twitter. To replace that source of information, I've shifted to other, more curated, sources like my RSS reader.

I'm a big Pinboard fanatic. It's almost completely plain: minimally styled text on a blank page. I capture every web link that I find interesting right into Pinboard. I also use it as a reading list. It's great.1

I was asked on Twitter about using DEVONthink (DT) as an RSS reader or Pinboard alternative.2 Before Pinboard I used DEVONthink and it was great for archiving web content. I even used DT as an RSS reader for a long time after I gave up on NetNewsWire. I don't think I'd be able to switch my entire bookmarking activity to DEVONthink simply because I'm on Windows during the day and have no access to DEVONthink there. If I were Mac and iOS only, this could be a great system.

Adding an RSS feed to DEVONthink can only be done on the Mac right now. The same goes with refreshing the feed content. This is a major barrier for some folks that are iOS only. Hopefully this changes soon.

On the Mac, just add a new RSS type document and provide the feed address.

[Image: Adding a Feed]

DEVONthink will download each article and display them in a nicely formatted view. Even better, if I mark the feed document with a "read_later" tag, each article will also get the same tag. While I'm reading on the Mac, I can archive the document for off-line storage and search.

[Image: Capture in the Mac App]

If I'm in Safari, I can use the web clipper to capture a page in several different formats and apply tags along the way too.

[Image: Mac web clipper]

When I was using DEVONthink as a feed reader and article filing system my favorite thing was the intelligence built into the app. Not only was searching very easy but it was more powerful. The boolean search operators were far more effective than anything I can now do on Pinboard. The "See Also" system in DT is unlike anything I've ever used. This was particularly useful when I had a large collection of web pages in my archive. If one page wasn't exactly what I wanted, the "See Also" score often pointed me to the right one.

[Image: See Also]

Syncing from Mac to iOS works well in DEVONthink. The read status is updated and I can create archives of pages. As mentioned above, DEVONthink on iOS cannot update an RSS feed document on its own. It can only sync the articles from the Mac. This is probably a bigger barrier for me than even the lack of access on Windows. I'm often on my phone and want feeds to update wherever I am.

[Image: RSS List on iOS]

The article view on iOS is satisfactory. It's not as nice as a dedicated reader but it works. The real benefit is in the built-in capture tools in DEVONthink To Go. While reading an article I can quickly capture it for off-line archiving right within the app. Not only are there several options but I can also edit the metadata along the way, which benefits future searching.

[Image: Article Capture on iOS]

Unfortunately, I don't think DEVONthink for iOS is ready to be my RSS reader. It's still a terrific place to archive pages and bookmarks though. The search operators still work in iOS and the extension is a very convenient way to capture a lot of different content. Web pages can be captured as archives or as Markdown pages with reference links back to the original source.

[Image: Clipping on iOS]

After a bit of bending over backwards to make this work, I don't think I can use DEVONthink for a complete bookmarking or feed-reading solution. I think it's great for selective archiving of web content that I'm confident I'll want to look up later. I use the Mac and iOS applications quite a bit when I'm researching how to build a fence or what the best new TV is, but I don't think I'd want to archive every web page I read in DEVONthink. I'm a bit on the fence here, though. With a few more features in the iOS app I'd be all in with DEVONthink for bookmarking. Grouping web archives with bookmarks and text notes inside a folder structure is a great argument in favor of DEVONthink. I'll keep using Pinboard for bookmarks, Feedbin for RSS reading, and DEVONthink for dedicated research work. We'll see what 2017 brings. For now, I enjoyed the distraction of experimenting with a long-unchanged system.

DEVONthink To Go for iOS | $15

DEVONthink Pro Office for Mac | $150

  1. I do miss Twitter for things like this, but it's unhealthy for me and I question its value as part of my life. Don't "@" me there. Email works.


Performance Tuning HAProxy


In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those performance-tuning concepts and walk through some basic tuning options for HAProxy.


What is HAProxy

HAProxy is a software load balancer commonly used to distribute TCP-based traffic to multiple backend systems. It provides not only load balancing but also has the ability to detect unresponsive backend systems and reroute incoming traffic.

In a traditional IT infrastructure, load balancing is often performed by expensive hardware devices. In cloud and highly distributed infrastructure environments, there is a need to provide this same type of service while maintaining the elastic nature of cloud infrastructure. This is the type of environment where HAProxy shines, and it does so while maintaining a reputation for being extremely efficient out of the box.

Much like NGINX, HAProxy has quite a few parameters set for optimal performance out of the box. However, as with most things, we can still tune it for our specific environment to increase performance.

In this article, we are going to install and configure HAProxy to act as a load balancer for two NGINX instances serving a basic static HTML site. Once set up, we are going to take that configuration and tune it to gain even more performance out of HAProxy.


Installing HAProxy

For our purposes, we will be installing HAProxy on an Ubuntu system, where installation is fairly simple. To accomplish this, we will use the APT package manager, specifically the apt-get command.

# apt-get install haproxy
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  liblua5.3-0
Suggested packages:
  vim-haproxy haproxy-doc
The following NEW packages will be installed:
  haproxy liblua5.3-0
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 872 kB of archives.
After this operation, 1,997 kB of additional disk space will be used.
Do you want to continue? [Y/n] y

With the above complete, we now have HAProxy installed. The next step is to configure it to load balance across our backend NGINX instances.

Basic HAProxy Config

In order to set up HAProxy to load balance HTTP traffic across two backend systems, we will first need to modify HAProxy’s default configuration file /etc/haproxy/haproxy.cfg.

To get started, we will be setting up a basic frontend service within HAProxy. We will do this by appending the below configuration block.

frontend www
    bind               :80
    mode               http
    default_backend    nginx-nodes

Before going too far, let’s break down this configuration a bit to understand what exactly we are telling HAProxy to do.

In this section, we are defining a frontend service for HAProxy. This is essentially a frontend listener that will accept incoming traffic. The first parameter we define within this section is the bind parameter. This parameter is used to tell HAProxy what IP and port to listen on; in this case, :80, meaning port 80 on any local IP. This means our HAProxy instance will listen for traffic on port 80 and route it through this frontend service named www.

Within this section, we are also defining the type of traffic with the mode parameter. This parameter accepts tcp or http options. Since we will be load balancing HTTP traffic, we will use the http value. The last parameter we are defining is default_backend, which is used to define the backend service HAProxy should load balance to. In this case, we will use a value of nginx-nodes (a name chosen here for illustration), which will route traffic through our NGINX instances.

backend nginx-nodes
    mode     http
    balance  roundrobin
    # <nyc2-ip> and <sfo1-ip> below are placeholder addresses
    server   nyc2 <nyc2-ip>:80 check
    server   sfo1 <sfo1-ip>:80 check

Like the frontend service, we will also need to define our backend service by appending the above configuration block to the same /etc/haproxy/haproxy.cfg file.

In this backend configuration block, we are defining the systems that HAProxy will load balance traffic to. Like the frontend section, this section also contains a mode parameter to define whether these are tcp or http backends. For this example, we will once again use http as our backend systems are a set of NGINX webservers.

In addition to the mode parameter, this section also has a parameter called balance. The balance parameter is used to define the load-balancing algorithm that determines which backend node each request should be sent to. For this initial step, we can simply set this value to roundrobin, which sends traffic to each backend evenly as requests come in. This setting is pretty common and often the first algorithm users start with.

The final parameter in the backend service is server, which is used to define the backend system to balance to. In our example, there are two lines that each define a different server. These two servers are the NGINX webservers that we will load balance traffic to in this example.

The format of the server line is a bit different than the other parameters. This is because node-specific settings can be configured via the server parameter. In the example above, we are defining a label, IP:Port, and whether or not a health check should be used to monitor the backend node.

By specifying check after the web-server’s address, we are defining that HAProxy should perform a health check to determine whether the backend system is responsive or not. If the backend system is not responsive, incoming traffic will not be routed to that backend system.

With the changes above, we now have a basic HAProxy instance configured to load balance an HTTP service. In order for these configurations to take effect, however, we will need to restart the HAProxy instance. We can do that with the systemctl command.

# systemctl restart haproxy

Now that our configuration changes are in place, let’s go ahead and get started with establishing our baseline performance of HAProxy.

Baselining Our Performance

In the “Tuning NGINX for Performance” article, I discussed the importance of establishing a performance baseline before making any changes. By establishing a baseline performance before making any changes, we can identify whether or not the changes we make have a beneficial effect.

As in the previous article, we will be using the ApacheBench tool to measure the performance of our HAProxy instance. In this example however, we will be using the flag -c to change the number of concurrent HTTP sessions and the flag -n to specify the number of HTTP requests to make.

# ab -c 2500 -n 5000 -s 90 http://<haproxy-ip>/
Requests per second:    97.47 [#/sec] (mean)
Time per request:       25649.424 [ms] (mean)
Time per request:       10.260 [ms] (mean, across all concurrent requests)

After running the ab (ApacheBench) tool, we can see that out of the box our HAProxy instance is servicing 97.47 HTTP requests per second. This metric will be our baseline measurement; we will be measuring any changes against this metric.

Setting the Maximum Number of Connections

One of the most common tunable parameters for HAProxy is the maxconn setting. This parameter defines the maximum number of connections the entire HAProxy instance will accept.

When calling the ab command above, I used the -c flag to tell ab to open 2500 concurrent HTTP sessions. By default, the maxconn parameter is set to 2000. This means that a default instance of HAProxy will start queuing HTTP sessions once it hits 2000 concurrent sessions. Since our test is launching 2500 sessions, this means that at any given time at least 500 HTTP sessions are being queued while 2000 are being serviced immediately. This certainly should have an effect on our throughput for HAProxy.

Let’s go ahead and raise this limit by once again editing the /etc/haproxy/haproxy.cfg file.

        maxconn         5000

Within the haproxy.cfg file, there is a global section; this section is used to modify “global” parameters for the entire HAProxy instance. By adding the maxconn setting above, we are increasing the maximum number of connections for the entire HAProxy instance to 5000, which should be plenty for our testing. In order for this change to take effect, we must once again restart the HAProxy instance using the systemctl command.

 # systemctl restart haproxy 

With HAProxy restarted, let’s run our test again.

# ab -c 2500 -n 5000 -s 90 http://<haproxy-ip>/
Requests per second:    749.22 [#/sec] (mean)
Time per request:       3336.786 [ms] (mean)
Time per request:       1.335 [ms] (mean, across all concurrent requests)

In our baseline test, the Requests per second value was 97.47. After adjusting the maxconn parameter, the same test returned a Requests per second of 749.22. This is a huge improvement over our baseline test and just goes to show how important of a parameter the maxconn setting is.

When tuning HAProxy, it is very important to understand your target number of concurrent sessions per instance. By identifying and tuning this value upfront, you can save yourself a lot of troubleshooting with HAProxy performance during peak traffic load.

In this article, we set the maxconn value to 5000; however, this is still a fairly low number for a high-traffic environment. As such, I would highly recommend identifying your desired number of concurrent sessions and tuning the maxconn parameter before changing any other parameter when tuning HAProxy.

Multiprocessing and CPU Pinning

Another interesting tunable for HAProxy is the nbproc parameter. By default, HAProxy has a single worker process, which means that all of our HTTP sessions will be load balanced by a single process. With the nbproc parameter, it is possible to create multiple worker processes to help distribute the workload internally.

While additional worker processes might sound good at first, they only tend to provide value when the server itself has more than one CPU. It is not uncommon for environments that create multiple worker processes on single-CPU systems to see HAProxy perform worse than it did as a single-process instance. This is because the overhead of managing multiple worker processes yields diminishing returns when the number of workers exceeds the number of CPUs available.

With this in mind, it is recommended that the nbproc parameter should be set to match the number of CPUs available to the system. In order to tune this parameter for our environment, we first need to check how many CPUs are available. We can do this by executing the lshw command.

# lshw -short -class cpu
H/W path      Device  Class      Description
/0/401                processor  Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
/0/402                processor  Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz

From the output above, it appears that we have 2 available CPUs on our HAProxy server. Let’s go ahead and set the nbproc parameter to 2, which will tell HAProxy to start a second worker process on restart. We can do this by once again editing the global section of the /etc/haproxy/haproxy.cfg file.

        maxconn         5000
        nbproc          2
        cpu-map         1 0
        cpu-map         2 1

In the above HAProxy config example, I included another parameter named cpu-map. This parameter is used to pin a specific worker process to the specified CPU using CPU affinity. This allows the processes to better distribute the workload across multiple CPUs.

While this might not sound very critical at first, it is when you consider how Linux determines which CPU a process should use when it requires CPU time.

Understanding CPU Affinity

The Linux kernel internally has a concept called CPU affinity, whereby a process is pinned to a specific CPU for its CPU time. If we use our system above as an example, we have two CPUs (0 and 1) and a single-threaded HAProxy instance. Without any changes, our single worker process will be pinned to either CPU 0 or CPU 1.

If we were to enable a second worker process without specifying which CPU that process should have an affinity to, that process would default to the same CPU that the first worker was bound to.

The reason for this is due to how Linux handles CPU affinity of child processes. Unless told otherwise, a child process is always bound to the same CPU as the parent process in Linux. The reason for this is to allow processes to leverage the L1 and L2 caches available on the physical CPU. In most cases, this makes an application perform faster.

The downside to this can be seen in our example. If we enable two workers and both worker1 and worker2 were bound to CPU 0, the workers would constantly be competing for the same CPU time. By pinning the worker processes to different CPUs, we are able to better utilize all of the CPU time available to our system and reduce the number of times our worker processes are waiting for CPU time.

In the configuration above, we are using cpu-map to define CPU affinity by pinning worker1 to CPU 0 and worker2 to CPU 1.
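Pulling these pieces together, the global section of our /etc/haproxy/haproxy.cfg now contains the following (other default settings omitted):

global
        maxconn         5000
        nbproc          2
        cpu-map         1 0
        cpu-map         2 1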

After making these changes, we can restart the HAProxy instance again and retest with the ab tool to see some significant improvements in performance.

# systemctl restart haproxy

With HAProxy restarted, let’s go ahead and rerun our test with the ab command.

# ab -c 2500 -n 5000 -s 90 http://<haproxy-ip>/
Requests per second:    1185.97 [#/sec] (mean)
Time per request:       2302.093 [ms] (mean)
Time per request:       0.921 [ms] (mean, across all concurrent requests)

In our previous test run, we were able to get a Requests per second of 749.22. With this latest run, after increasing the number of worker processes, we were able to push the Requests per second to 1185.97, a sizable improvement.

Adjusting the Load Balancing Algorithm

The final adjustment we will make is not a traditional tuning parameter, but it still affects how many HTTP sessions our HAProxy instance can process. The adjustment is the load-balancing algorithm we have specified.

Earlier in this post, we specified the load balancing algorithm of roundrobin in our backend service. In this next step, we will be changing the balance parameter to static-rr by once again editing the /etc/haproxy/haproxy.cfg file.

backend nginx-nodes
    mode    http
    balance static-rr
    server  nyc2 <nyc2-ip>:80 check
    server  sfo1 <sfo1-ip>:80 check

The static-rr algorithm is a round robin algorithm very similar to the roundrobin algorithm, with the exception that it does not support dynamic weighting. This weighting mechanism allows HAProxy to select a preferred backend over others. Since static-rr doesn’t worry about dynamic weighting, it is slightly more efficient than the roundrobin algorithm (approximately 1 percent more efficient).

Let’s go ahead and test the impact of this change by restarting the HAProxy instance again and executing another ab test run.

# systemctl restart haproxy

With the service restarted, let’s go ahead and rerun our test.

# ab -c 2500 -n 5000 -s 90 http://<haproxy-ip>/
Requests per second:    1460.29 [#/sec] (mean)
Time per request:       1711.993 [ms] (mean)
Time per request:       0.685 [ms] (mean, across all concurrent requests)

In this final test, we were able to increase our Requests per second metric to 1460.29, a sizable difference over the 1185.97 results from the previous run.


In the beginning of this article, our basic HAProxy instance was only able to service 97 HTTP requests per second. After increasing the maximum number of connections, increasing the number of worker processes, and changing our load-balancing algorithm, we were able to push our HAProxy instance to 1460 HTTP requests per second: an improvement of roughly 1,400 percent.

Even with such an increase in performance, there are still more tuning parameters available within HAProxy. While this article covered a few basic and unconventional parameters, we have still only scratched the surface of tuning HAProxy. For more tuning options, you can check out HAProxy's configuration guide.





Note: parts of this article are subjective and contain spoilers. The article also does not cover Persona Q or Persona: Trinity Soul.



The main spin-off line of Shin Megami Tensei always carries the subtitle "Megami Ibunroku" in its Japanese releases. The US releases, however, casually add the words "Shin Megami Tensei" to the title, which led American players to spontaneously construct a "Megami multiverse" theory. It is laid out with a straight face, and on close analysis it more or less holds up.

[Image: the US cover of P3 FES, which still carries the "Shin Megami Tensei" reading]
[Image: a Google search for "Shin Megami Tensei universe"; the original image is 4500x3375]

[Image: the PS version of Persona 2: Innocent Sin, which has no "Megami Ibunroku" subtitle]
[Image: the NDS version of Devil Survivor 2, whose subtitle is the English title in katakana]
[Image: the PSP version of "Persona", seemingly a deliberate distinction from the mainline series]








[Image: Metis explaining how the Abyss of Time formed]

Besides psychology, the Persona series was also hugely influenced by JoJo's Bizarre Adventure; one of its lead creators, Kazuma Kaneko, is a JoJo fan. Early in the game's design there was already a plan to make "an RPG that attacks with Stands," and even the classification of Personas by tarot card was borrowed from the setting of JoJo Part 3.



Stepping outside the games' own fiction, "Persona" comes from the Latin word for the mask worn by an actor on stage, as distinct from the modern notion of "personality." The persona is a surface attached outside the self in order to adapt to social life, a product of social living; under the influence of different external forces, different personas emerge.




Unlike later entries, the protagonists' parties in P1 and P2 can freely switch Personas because all of them once took part in a summoning ritual called the "Persona Game." Through the ritual, Philemon confirmed that they had the ability to keep a Persona under control, granted them the qualification to summon Personas, and allowed every member to enter the Velvet Room.




















[Image: Philemon, an embodiment of the rational side of humanity's collective unconscious]

Igor, the master of the Velvet Room, is likewise one of Philemon's creations. His name was inspired by Son of Frankenstein (1939), a film adaptation of Mary Shelley's famous novel.



[Image: Aigis entering the Velvet Room in The Answer, from P3 FES]

It is worth noting that Igor's voice actor, Isamu Tanonaka, passed away in 2010. Later entries still reuse his old voice recordings, and new lines simply go unvoiced. It really isn't a haunting.



[Image: Belladonna; the female voice heard upon entering the room is hers]
[Image: Nameless, who has spent 900,155 days in the Velvet Room]


[Image: Margaret, Elizabeth, and Theodore are in fact siblings; Margaret is the eldest and Theodore the youngest]


[Image: the P series has many JoJo homages; this pose is said to imitate Joseph Joestar]



The assistants above take many different forms, and apart from Belladonna and Nameless, their names likewise come from Mary Shelley's Frankenstein. Elizabeth and Lavenza are in fact the novel's character Elizabeth Lavenza split into two names, and each of the others can be matched to a corresponding character. Interested players may enjoy digging into it.



Scenario: Tadashi Satomi; producer/director: Kouji Okada; art: Kazuma Kaneko.

[Image: from left to right, Tadashi Satomi, Kouji Okada, and Kazuma Kaneko]

Tadashi Satomi, born in 1970, has since left the company. He wrote the scenarios for P1, P2: Innocent Sin, and P2: Eternal Punishment; P1 was his first title, and the last game he worked on was Digital Devil Saga 2 on the PS2. Reportedly, the Persona worldview was built jointly by Satomi, Okada, and Kaneko, with most of the foundational setting created by Satomi, who fused Jungian psychology with the Cthulhu Mythos to create the games' original worldview.

[Image: Takahisa Kandori, whose Persona is Nyarlathotep of the Cthulhu Mythos]

Kazuma Kaneko, born in 1964, never attended university and is said to have taught himself to draw. He joined Atlus in 1988, is now part of Atlus's management, and most recently worked on the designs for Shin Megami Tensei IV and Shin Megami Tensei IV Final.


[Image: one of the very few games developed by Gaia: Monster Kingdom: Jewel Summoner for PSP]
[Image: another Gaia-developed PSP game: 《密码之魂 伊迪亚的传承》]


[Image: Tamaki Uchida, the female protagonist of Shin Megami Tensei if..., who appears in both P2: Innocent Sin and P2: Eternal Punishment]

And although P1 and P2 have different story settings, characters from P1 are used to thread the two games together. In the ending of P2: Eternal Punishment, the P1 party even gathers in one place to welcome the P1 protagonist back. P1 and P2, crafted by these three, are the two most tightly connected entries in the P series.

[Image: Yukino Mayuzumi, a party member in P2: Innocent Sin who also appears in Eternal Punishment; her armor-like coat design is very cool]





Among them, Katsura Hashino deserves special mention. Before becoming director he had made two games at Atlus: Trauma Center: Under the Knife on the NDS and Shin Megami Tensei III on the PS2. The former is a fine game but has nothing to do with the P series; the latter laid the foundation for the Persona series' current battle system.


In Shin Megami Tensei III, whenever one of our characters lands a critical hit or attacks an enemy's elemental weakness, they gain one extra chance to act. The introduction of this system made battles more strategic. Building on it, the team absorbed the lessons of P2 and created the "All-Out Attack" system, which sped battles up. Since P3, these two systems have been the core of every entry's battle system.






In Tokimeki Memorial 3, during the demon showdown event on Serika Kamijou's route, a background track extremely similar to the Velvet Room theme appears. Tokimeki Memorial 3 was released after P2: Eternal Punishment, which shows how much influence the Persona series already had at the time.







[Image: this character appears as an Easter egg in later entries, and is mentioned in P5 as well]






In P3 FES, the game created the "Aeon" (永劫) card for the first time, outside the traditional tarot system.




In the English version, 永劫 was rendered as "Aeon," whose English definition is "an extremely long period of time," close in meaning to "eternity."




