
Mad Marx: The Class Warrior

3 public comments
rraszews
87 days ago
Karl Marx of the Wasteland headshotting Ayn Rand is the single most beautiful thought I have ever been gifted with.
CarlEdman
87 days ago
So true! All of the world's problems could be solved by Marx(ists) killing more of their opponents.
quad
87 days ago
Your irony game is so strong.
rclatterbuck
87 days ago
I'd watch it

“MP3 is dead” missed the real, much better story


If you read the news, you may think the MP3 file format was recently, officially “killed” somehow, and that any remaining MP3 holdouts should all move to AAC now. These stories are all simple rewrites of Fraunhofer IIS’ announcement that they’re terminating the MP3 patent-licensing program.

Very few people got it right. The others missed what happened last month:

If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3 technology became patent-free in the United States on 16 April 2017 when U.S. Patent 6,009,399, held by and administered by Technicolor, expired.

MP3 is no less alive now than it was last month or will be next year — the last known MP3 patents have simply expired.1

So while there’s a debate to be had — in a moment — about whether MP3 should still be used today, Fraunhofer’s announcement has nothing to do with that, and is simply the ending of its patent-licensing program (because the patents have all expired) and a suggestion that we move to a newer, still-patented format.

Why still use MP3 when newer, better formats exist?

MP3 is very old, but it’s the same age as JPEG, which has also long since been surpassed in quality by newer formats. JPEG is still ubiquitous not because Engadget forgot to declare its death, but because it’s good enough and supported everywhere, making it the most pragmatic choice most of the time.2

AAC and other newer audio codecs can produce better quality than MP3, but the difference is only significant at low bitrates. At about 128 kbps or greater, the differences between MP3 and other codecs are very unlikely to be noticed, so it isn’t meaningfully better for personal music collections. For new music, get AAC if you want, but it’s not worth spending any time replacing MP3s you already have.

AAC makes a lot of sense for low- and medium-quality applications where bandwidth is extremely limited or expensive, like phone calls and music-streaming services, or as sound for video, for which it’s the most widely supported format.

It may seem to make sense for podcasts, but it doesn’t. Podcasters need to distribute a single file type that’s playable on the most players and devices possible, and though AAC is widely supported today, it’s still not as widely supported as MP3. So podcasters overwhelmingly choose MP3: among the 50 million podcast episodes in Overcast’s database, 92% are MP3, and within the most popular 500 podcasts, 99% are MP3.

And AAC is also still patent-encumbered, which prevents innovation, hinders support, restricts potential uses, and imposes burdensome taxes on anything that goes near it.

So while AAC does offer some benefits, it also brings additional downsides and costs, and the benefits aren’t necessary or noticeable in some major common uses. Even the file-size argument for lower bitrates is less important than ever in a world of ever-increasing bandwidth and ever-higher relative uses of it.3

Ogg Vorbis and Opus offer similar quality advantages as AAC with (probably) no patent issues, which was necessary to provide audio options to free, open-source software and other contexts that aren’t compatible with patent licensing. But they’re not widely supported, limiting their useful applications.

Until a few weeks ago, there had never been an audio format that was small enough to be practical, widely supported, and free of patent restrictions, which forced difficult choices and needless friction upon the computing world. Now, at least for audio, that friction has officially ended. There’s finally a great choice without asterisks.

MP3 is supported by everything, everywhere, and is now patent-free. There has never been another audio format as widely supported as MP3, it’s good enough for almost anything, and now, over twenty years since it took the world by storm, it’s finally free.


  1. There’s some debate whether expirations of two remaining patents have happened yet. I’m not a patent lawyer, but the absolute latest interpretation would have the last one expire soon, on December 30, 2017. ↩︎

  2. For photos and other image types poorly suited to PNG, of course. ↩︎

  3. Suppose a podcast debates switching from 64 kbps MP3 to 48 kbps AAC. That would only save about 7 MB per hour of content, which isn’t a meaningful amount of data for most people anymore (especially for podcasts, which are typically background-downloaded on Wi-Fi). Read the Engadget and Gizmodo articles, at 3.6 and 5.2 MB, respectively, and you’ve already spent more than that difference. Watch a 5-minute YouTube video at default quality, and you’ll blow through about three times as much. ↩︎

1 public comment
mosheb007
90 days ago
MP3 is patent-free. Not dead. Still suitable for rates over 128

So you want to expose Go on the Internet


Back when crypto/tls was slow and net/http young, the general wisdom was to always put Go servers behind a reverse proxy like NGINX. That’s not necessary anymore!

At Cloudflare we recently experimented with exposing pure Go services to the hostile wide area network. With the Go 1.8 release, net/http and crypto/tls proved to be stable, performant and flexible.

However, the defaults are tuned for local services. In this article we’ll see how to tune and harden a Go server for Internet exposure.

crypto/tls

You’re not running an insecure HTTP server on the Internet in 2016. So you need crypto/tls. The good news is that it’s now really fast (as you’ve seen in a previous article on this blog), and its security track record so far is excellent.

The default settings resemble the Intermediate recommended configuration of the Mozilla guidelines. However, you should still set PreferServerCipherSuites to ensure safer and faster cipher suites are preferred, and CurvePreferences to avoid unoptimized curves: a client using CurveP384 would cause up to a second of CPU to be consumed on our machines.

&tls.Config{
	// Causes servers to use Go's default ciphersuite preferences,
	// which are tuned to avoid attacks. Does nothing on clients.
	PreferServerCipherSuites: true,
	// Only use curves which have assembly implementations
	CurvePreferences: []tls.CurveID{
		tls.CurveP256,
		tls.X25519, // Go 1.8 only
	},
}

If you can take the compatibility loss of the Modern configuration, you should then also set MinVersion and CipherSuites.

	MinVersion: tls.VersionTLS12,
	CipherSuites: []uint16{
		tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
		tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, // Go 1.8 only
		tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,   // Go 1.8 only
		tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,

		// Best disabled, as they don't provide Forward Secrecy,
		// but might be necessary for some clients
		// tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
		// tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
	},

Be aware that the Go implementation of the CBC cipher suites (the ones we disabled in Modern mode above) is vulnerable to the Lucky13 attack, even if partial countermeasures were merged in 1.8.

A final caveat: all these recommendations apply only to the amd64 architecture, for which fast, constant-time implementations of the crypto primitives (AES-GCM, ChaCha20-Poly1305, P256) are available. Other architectures are probably not fit for production use.

Since this server will be exposed to the Internet, it will need a publicly trusted certificate. You can get one easily and for free thanks to Let’s Encrypt and the golang.org/x/crypto/acme/autocert package’s GetCertificate function.
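A minimal sketch of that setup might look like the following (the hostname and cache directory are placeholders, and in a real server you would fold in the tls.Config settings from above; it assumes the crypto/tls, net/http, log and golang.org/x/crypto/acme/autocert imports):

m := &autocert.Manager{
	Prompt:     autocert.AcceptTOS,
	HostPolicy: autocert.HostWhitelist("example.com"),    // placeholder hostname
	Cache:      autocert.DirCache("/var/cache/autocert"), // persist certificates across restarts
}
srv := &http.Server{
	Addr:      ":https",
	TLSConfig: &tls.Config{GetCertificate: m.GetCertificate},
}
log.Fatal(srv.ListenAndServeTLS("", "")) // empty cert/key paths: GetCertificate does the work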

Don’t forget to redirect HTTP page loads to HTTPS, and consider HSTS if your clients are browsers.

srv := &http.Server{
	ReadTimeout:  5 * time.Second,
	WriteTimeout: 5 * time.Second,
	Handler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Header().Set("Connection", "close")
		url := "https://" + req.Host + req.URL.String()
		http.Redirect(w, req, url, http.StatusMovedPermanently)
	}),
}
go func() { log.Fatal(srv.ListenAndServe()) }()

You can use the SSL Labs test to check that everything is configured correctly.

net/http

net/http is a mature HTTP/1.1 and HTTP/2 stack. You probably know how (and have opinions about how) to use the Handler side of it, so that’s not what we’ll talk about. We will instead talk about the Server side and what goes on behind the scenes.

Timeouts

Timeouts are possibly the most dangerous edge case to overlook. Your service might get away with it on a controlled network, but it will not survive on the open Internet, especially (but not only) if maliciously attacked.

Applying timeouts is a matter of resource control. Even if goroutines are cheap, file descriptors are always limited. A connection that is stuck, not making progress or is maliciously stalling should not be allowed to consume them.

A server that has run out of file descriptors will fail to accept new connections with errors like:

http: Accept error: accept tcp [::]:80: accept: too many open files; retrying in 1s

A zero/default http.Server, like the one used by the package-level helpers http.ListenAndServe and http.ListenAndServeTLS, comes with no timeouts. You don’t want that.

HTTP server phases

There are three main timeouts exposed in http.Server: ReadTimeout, WriteTimeout and IdleTimeout. You set them by explicitly using a Server:

srv := &http.Server{
    ReadTimeout:  5 * time.Second,
    WriteTimeout: 10 * time.Second,
    IdleTimeout:  120 * time.Second,
    TLSConfig:    tlsConfig,
    Handler:      serveMux,
}
log.Println(srv.ListenAndServeTLS("", ""))

ReadTimeout covers the time from when the connection is accepted to when the request body is fully read (if you do read the body, otherwise to the end of the headers). It’s implemented in net/http by calling SetReadDeadline immediately after Accept.

The problem with a ReadTimeout is that it doesn’t allow a server to give the client more time to stream the body of a request based on the path or the content. Go 1.8 introduces ReadHeaderTimeout, which only covers up to the request headers. However, there’s still no clear way to do reads with timeouts from a Handler. Different designs are being discussed in issue #16100.
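If you're on 1.8, one sketch of using it is to cap just the header read tightly while leaving more room for the whole request; the durations here are arbitrary examples, not recommendations:

srv := &http.Server{
	ReadHeaderTimeout: 10 * time.Second, // covers only up to the end of the request headers
	ReadTimeout:       2 * time.Minute,  // still covers the whole read, body included
}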

WriteTimeout normally covers the time from the end of the request header read to the end of the response write (a.k.a. the lifetime of the ServeHTTP), by calling SetWriteDeadline at the end of readRequest.

However, when the connection is over HTTPS, SetWriteDeadline is called immediately after Accept so that it also covers the packets written as part of the TLS handshake. Annoyingly, this means that (in that case only) WriteTimeout ends up including the header read and the first byte wait.

Similarly to ReadTimeout, WriteTimeout is absolute, with no way to manipulate it from a Handler (#16100).

Finally, Go 1.8 introduces IdleTimeout which limits server-side the amount of time a Keep-Alive connection will be kept idle before being reused. Before Go 1.8, the ReadTimeout would start ticking again immediately after a request completed, making it very hostile to Keep-Alive connections: the idle time would consume time the client should have been allowed to send the request, causing unexpected timeouts also for fast clients.

You should set Read, Write and Idle timeouts when dealing with untrusted clients and/or networks, so that a client can’t hold up a connection by being slow to write or read.

For detailed background on HTTP/1.1 timeouts (up to Go 1.7) read my post on the Cloudflare blog.

HTTP/2

HTTP/2 is enabled automatically on any Go 1.6+ server if:

  • the request is served over TLS/HTTPS
  • Server.TLSNextProto is nil (setting it to an empty map is how you disable HTTP/2)
  • Server.TLSConfig is set and ListenAndServeTLS is used or
  • Serve is used and tls.Config.NextProtos includes "h2" (like []string{"h2", "http/1.1"}, since Serve is called too late to auto-modify the TLS Config)
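
For the last case, a sketch of wiring it up yourself, assuming the srv and tlsConfig values from the snippets above:

// Advertise HTTP/2 via ALPN ourselves, since Serve is called too late
// for net/http to modify the TLS configuration on our behalf.
tlsConfig.NextProtos = []string{"h2", "http/1.1"}

ln, err := net.Listen("tcp", ":443")
if err != nil {
	log.Fatal(err)
}
log.Fatal(srv.Serve(tls.NewListener(ln, tlsConfig)))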

These timeouts take on a slightly different meaning in HTTP/2, since the same connection can be serving different requests at the same time; however, they are abstracted to the same set of Server timeouts in Go.

Sadly, ReadTimeout breaks HTTP/2 connections in Go 1.7. Instead of being reset for each request it’s set once at the beginning of the connection and never reset, breaking all HTTP/2 connections after the ReadTimeout duration. It’s fixed in 1.8.

Between this and the inclusion of idle time in ReadTimeout, my recommendation is to upgrade to 1.8 as soon as possible.

TCP Keep-Alives

If you use ListenAndServe (as opposed to passing a net.Listener to Serve, which offers zero protection by default) a TCP Keep-Alive period of three minutes will be set automatically. That will help with clients that disappear completely off the face of the earth leaving a connection open forever, but I’ve learned not to trust that, and to set timeouts anyway.

To begin with, three minutes might be too high, which you can solve by implementing your own tcpKeepAliveListener.
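A sketch of such a listener, modeled on the unexported one in net/http (the 30-second period is an arbitrary example):

type tcpKeepAliveListener struct {
	*net.TCPListener
}

func (ln tcpKeepAliveListener) Accept() (net.Conn, error) {
	tc, err := ln.AcceptTCP()
	if err != nil {
		return nil, err
	}
	tc.SetKeepAlive(true)
	tc.SetKeepAlivePeriod(30 * time.Second) // instead of the default three minutes
	return tc, nil
}

// Used together with Serve:
// srv.Serve(tls.NewListener(tcpKeepAliveListener{ln.(*net.TCPListener)}, tlsConfig))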

More importantly, a Keep-Alive only makes sure that the client is still responding, but does not place an upper limit on how long the connection can be held. A single malicious client can just open as many connections as your server has file descriptors, hold them half-way through the headers, respond to the rare keep-alives, and effectively take down your service.

Finally, in my experience connections tend to leak anyway until timeouts are in place.

ServeMux

Package level functions like http.Handle[Func] (and maybe your web framework) register handlers on the global http.DefaultServeMux which is used if Server.Handler is nil. You should avoid that.

Any package you import, directly or through other dependencies, has access to http.DefaultServeMux and might register routes you don’t expect.

For example, if any package somewhere in the tree imports net/http/pprof clients will be able to get CPU profiles for your application. You can still use net/http/pprof by registering its handlers manually.

Instead, instantiate an http.ServeMux yourself, register handlers on it, and set it as Server.Handler. Or set whatever your web framework exposes as Server.Handler.
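For example (index is a placeholder handler; the pprof lines show how you might opt back in to just the profiling endpoints you want, normally behind authentication or on an internal-only listener):

mux := http.NewServeMux()
mux.HandleFunc("/", index) // placeholder for your own handler

// Re-register specific net/http/pprof handlers explicitly, instead of
// letting the package add them to http.DefaultServeMux for you.
mux.HandleFunc("/debug/pprof/", pprof.Index)
mux.HandleFunc("/debug/pprof/profile", pprof.Profile)

srv := &http.Server{
	Handler: mux,
	// ReadTimeout, WriteTimeout, IdleTimeout, TLSConfig as above
}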

Logging

net/http does a number of things before yielding control to your handlers: Accepts the connections, runs the TLS Handshake, …

If any of these go wrong a line is written directly to Server.ErrorLog. Some of these, like timeouts and connection resets, are expected on the open Internet. It’s not clean, but you can intercept most of those and turn them into metrics by matching them with regexes from the Logger Writer, thanks to this guarantee:

Each logging operation makes a single call to the Writer’s Write method.
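A sketch of that interception (the matched error string and the counter are illustrative, not an exhaustive list of what net/http logs):

var tlsHandshakeErrors uint64 // hypothetical metric; swap in your metrics library

var handshakeErrorRE = regexp.MustCompile(`http: TLS handshake error from .*: EOF`)

type errorLogWriter struct{}

func (errorLogWriter) Write(p []byte) (int, error) {
	// Each log line arrives as a single Write call, so it can be matched whole.
	if handshakeErrorRE.Match(p) {
		atomic.AddUint64(&tlsHandshakeErrors, 1)
		return len(p), nil
	}
	return os.Stderr.Write(p)
}

// srv.ErrorLog = log.New(errorLogWriter{}, "", 0)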

To abort from inside a Handler without logging a stack trace you can either panic(nil) or in Go 1.8 panic(http.ErrAbortHandler).

Metrics

A metric you’ll want to monitor is the number of open file descriptors. Prometheus does that by using the proc filesystem.

If you need to investigate a leak, you can use the Server.ConnState hook to get more detailed metrics of what stage the connections are in. However, note that there is no way to keep a correct count of StateActive connections without keeping state, so you’ll need to maintain a map[net.Conn]ConnState.
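A sketch of that bookkeeping (how the counts are exported is left to whatever metrics system you use):

var (
	connMu sync.Mutex
	conns  = map[net.Conn]http.ConnState{}
)

srv := &http.Server{
	// ReadTimeout, WriteTimeout, IdleTimeout, TLSConfig, Handler as above
	ConnState: func(c net.Conn, state http.ConnState) {
		connMu.Lock()
		defer connMu.Unlock()
		switch state {
		case http.StateHijacked, http.StateClosed:
			delete(conns, c)
		default:
			conns[c] = state
		}
		// len(conns), or a count per state, can now be exported as a gauge.
	},
}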

Conclusion

The days of needing NGINX in front of all Go services are gone, but you still need to take a few precautions on the open Internet, and probably want to upgrade to the shiny, new Go 1.8.

Happy serving!


Collecting and Reading with DEVONthink


I read a lot. Until recently that included Twitter. To replace that source of information, I've shifted to other, more curated, sources like my RSS reader.

I'm a big fan of Pinboard. It's almost completely plain, minimally styled, text on a blank page. I capture every web link that I find interesting right into Pinboard. I also use it as a reading list. It's great.1

I was asked on Twitter about using DEVONthink (DT) as an RSS reader or Pinboard alternative.2 Before Pinboard I used DEVONthink and it was great for archiving web content. I even used DT as an RSS reader for a long time after I gave up on NetNewsWire. I don't think I'd be able to switch my entire bookmarking activity to DEVONthink simply because I'm on Windows during the day and have no access to DEVONthink there. If I were Mac and iOS only, this could be a great system.

Adding an RSS feed to DEVONthink can only be done on the Mac right now. The same goes with refreshing the feed content. This is a major barrier for some folks that are iOS only. Hopefully this changes soon.

On the Mac, just add a new RSS type document and provide the feed address.

Adding a Feed

DEVONthink will download each article and display them in a nicely formatted view. Even better, if I mark the feed document with a "read_later" tag, each article will also get the same tag. While I'm reading on the Mac, I can archive the document for off-line storage and search.

Capture in the Mac App

If I'm in Safari, I can use the web clipper to capture a page in several different formats and apply tags along the way too.

Mac web clipper

When I was using DEVONthink as a feed reader and article filing system my favorite thing was the intelligence built into the app. Not only was searching very easy but it was more powerful. The boolean search operators were far more effective than anything I can now do on Pinboard. The "See Also" system in DT is unlike anything I've ever used. This was particularly useful when I had a large collection of web pages in my archive. If one page wasn't exactly what I wanted, the "See Also" score often pointed me to the right one.

See Also

Syncing from Mac to iOS works well in DEVONthink. The read status is updated and I can create archives of pages. As mentioned above, DEVONthink on iOS cannot update an RSS feed document on its own. It can only sync the articles from the Mac. This is probably a bigger barrier for me than even the lack of access on Windows. I'm often on my phone and want feeds to update wherever I am.

RSS List on iOS

The article view on iOS is satisfactory. It's not as nice as a dedicated reader but it works. The real benefit is in the built-in capture tools in DEVONthink To Go. While reading an article I can quickly capture it for off-line archiving right within the app. Not only are there several options but I can also edit the metadata along the way, which benefits future searching.

Article Capture on iOS

Unfortunately, I don't think DEVONthink for iOS is ready to be my RSS reader. It's still a terrific place to archive pages and bookmarks though. The search operators still work in iOS and the extension is a very convenient way to capture a lot of different content. Web pages can be captured as archives or as Markdown pages with reference links back to the original source.

Clipping on iOS

After a bit of bending over backwards to make this work, I don't think I can use DEVONthink for a complete bookmarking or feed reading solution. I think it's great for selective archiving of web content that I'm confident I'll want to look up later. I use the Mac and iOS applications quite a bit when I'm researching how to build a fence or what the best new TV is, but I don't think I'd want to archive every web page I read in DEVONthink. I'm a bit on the fence here, though. With a few more features in the iOS app I'd be all in with DEVONthink for bookmarking. Grouping web archives with bookmarks and text notes inside a folder structure is a great argument in favor of DEVONthink. I'll keep using Pinboard for bookmarks, Feedbin for RSS reading, and DEVONthink for dedicated research work. We'll see what 2017 brings. For now, I enjoyed the distraction of experimenting with a long-unchanged system.

DEVONthink To Go for iOS | $15

DEVONthink Pro Office for Mac | $150


  1. I do miss Twitter for things like this, but it's unhealthy for me and I question its value as part of my life. Don't "@" me there. Email works. 

yee
275 days ago
The #1 killer app on macOS!

Performance Tuning HAProxy


In a recent article, I covered how to tune the NGINX webserver for a simple static HTML page. In this article, we are going to once again explore those performance-tuning concepts and walk through some basic tuning options for HAProxy.




What is HAProxy

HAProxy is a software load balancer commonly used to distribute TCP-based traffic to multiple backend systems. It provides not only load balancing but also has the ability to detect unresponsive backend systems and reroute incoming traffic.

In a traditional IT infrastructure, load balancing is often performed by expensive hardware devices. In cloud and highly distributed infrastructure environments, there is a need to provide this same type of service while maintaining the elastic nature of cloud infrastructure. This is the type of environment where HAProxy shines, and it does so while maintaining a reputation for being extremely efficient out of the box.

Much like NGINX, HAProxy has quite a few parameters set for optimal performance out of the box. However, as with most things, we can still tune it for our specific environment to increase performance.

In this article, we are going to install and configure HAProxy to act as a load balancer for two NGINX instances serving a basic static HTML site. Once set up, we are going to take that configuration and tune it to gain even more performance out of HAProxy.




Installing HAProxy

For our purposes, we will be installing HAProxy on an Ubuntu system, where installation is fairly simple. To accomplish this, we will use the Apt package manager; specifically, we will be using the apt-get command.

# apt-get install haproxy
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  liblua5.3-0
Suggested packages:
  vim-haproxy haproxy-doc
The following NEW packages will be installed:
  haproxy liblua5.3-0
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 872 kB of archives.
After this operation, 1,997 kB of additional disk space will be used.
Do you want to continue? [Y/n] y

With the above complete, we now have HAProxy installed. The next step is to configure it to load balance across our backend NGINX instances.

Basic HAProxy Config

In order to set up HAProxy to load balance HTTP traffic across two backend systems, we will first need to modify HAProxy’s default configuration file /etc/haproxy/haproxy.cfg.

To get started, we will be setting up a basic frontend service within HAProxy. We will do this by appending the below configuration block.

frontend www
    bind               :80
    mode               http
    default_backend    bencane.com

Before going too far, let’s break down this configuration a bit to understand what exactly we are telling HAProxy to do.

In this section, we are defining a frontend service for HAProxy. This is essentially a frontend listener that will accept incoming traffic. The first parameter we define within this section is the bind parameter. This parameter is used to tell HAProxy what IP and Port to listen on; 0.0.0.0:80 in this case. This means our HAProxy instance will listen for traffic on port 80 and route it through this frontend service named www.

Within this section, we are also defining the type of traffic with the mode parameter. This parameter accepts tcp or http options. Since we will be load balancing HTTP traffic, we will use the http value. The last parameter we are defining is default_backend, which is used to define the backend service HAProxy should load balance to. In this case, we will use a value of bencane.com which will route traffic through our NGINX instances.

backend bencane.com
    mode     http
    balance  roundrobin
    server   nyc2 nyc2.bencane.com:80 check
    server   sfo1 sfo1.bencane.com:80 check

Like the frontend service, we will also need to define our backend service by appending the above configuration block to the same /etc/haproxy/haproxy.cfg file.

In this backend configuration block, we are defining the systems that HAProxy will load balance traffic to. Like the frontend section, this section also contains a mode parameter to define whether these are tcp or http backends. For this example, we will once again use http as our backend systems are a set of NGINX webservers.

In addition to the mode parameter, this section also has a parameter called balance. The balance parameter is used to define the load-balancing algorithm that determines which backend node each request should be sent to. For this initial step, we can simply set this value to roundrobin, which is used to send traffic evenly as it comes in. This setting is pretty common and often the first load-balancing algorithm that users start with.

The final parameter in the backend service is server, which is used to define the backend system to balance to. In our example, there are two lines that each define a different server. These two servers are the NGINX webservers that we will be load balancing traffic to in this example.

The format of the server line is a bit different than the other parameters. This is because node-specific settings can be configured via the server parameter. In the example above, we are defining a label, IP:Port, and whether or not a health check should be used to monitor the backend node.

By specifying check after the web-server’s address, we are defining that HAProxy should perform a health check to determine whether the backend system is responsive or not. If the backend system is not responsive, incoming traffic will not be routed to that backend system.

With the changes above, we now have a basic HAProxy instance configured to load balance an HTTP service. In order for these configurations to take effect however, we will need to restart the HAProxy instance. We can do that with the systemctl command.

# systemctl restart haproxy

Now that our configuration changes are in place, let’s go ahead and get started with establishing our baseline performance of HAProxy.

Baselining Our Performance

In the “Tuning NGINX for Performance” article, I discussed the importance of establishing a performance baseline before making any changes. By establishing a baseline performance before making any changes, we can identify whether or not the changes we make have a beneficial effect.

As in the previous article, we will be using the ApacheBench tool to measure the performance of our HAProxy instance. In this example however, we will be using the flag -c to change the number of concurrent HTTP sessions and the flag -n to specify the number of HTTP requests to make.

# ab -c 2500 -n 5000 -s 90 http://104.131.125.168/
Requests per second:    97.47 [#/sec] (mean)
Time per request:       25649.424 [ms] (mean)
Time per request:       10.260 [ms] (mean, across all concurrent requests)

After running the ab (ApacheBench) tool, we can see that out of the box our HAProxy instance is servicing 97.47 HTTP requests per second. This metric will be our baseline measurement; we will be measuring any changes against this metric.

Setting the Maximum Number of Connections

One of the most common tunable parameters for HAProxy is the maxconn setting. This parameter defines the maximum number of connections the entire HAProxy instance will accept.

When calling the ab command above, I used the -c flag to tell ab to open 2500 concurrent HTTP sessions. By default, the maxconn parameter is set to 2000. This means that a default instance of HAProxy will start queuing HTTP sessions once it hits 2000 concurrent sessions. Since our test is launching 2500 sessions, this means that at any given time at least 500 HTTP sessions are being queued while 2000 are being serviced immediately. This certainly should have an effect on our throughput for HAProxy.

Let’s go ahead and raise this limit by once again editing the /etc/haproxy/haproxy.cfg file.

global
        maxconn         5000

Within the haproxy.cfg file, there is a global section; this section is used to modify “global” parameters for the entire HAProxy instance. By adding the maxconn setting above, we are increasing the maximum number of connections for the entire HAProxy instance to 5000, which should be plenty for our testing. In order for this change to take effect, we must once again restart the HAProxy instance using the systemctl command.

 # systemctl restart haproxy 

With HAProxy restarted, let’s run our test again.

# ab -c 2500 -n 5000 -s 90 http://104.131.125.168/
Requests per second:    749.22 [#/sec] (mean)
Time per request:       3336.786 [ms] (mean)
Time per request:       1.335 [ms] (mean, across all concurrent requests)

In our baseline test, the Requests per second value was 97.47. After adjusting the maxconn parameter, the same test returned a Requests per second of 749.22. This is a huge improvement over our baseline test and just goes to show how important a parameter the maxconn setting is.

When tuning HAProxy, it is very important to understand your target number of concurrent sessions per instance. By identifying and tuning this value upfront, you can save yourself a lot of troubleshooting with HAProxy performance during peak traffic load.

In this article, we set the maxconn value to 5000; however this is still a fairly low number for a high-traffic environment. As such, I would highly recommend identifying your desired number of concurrent sessions and tuning the maxconn parameter before changing any other parameter when tuning HAProxy.

Multiprocessing and CPU Pinning

Another interesting tunable for HAProxy is the nbproc parameter. By default, HAProxy has a single worker process, which means that all of our HTTP sessions will be load balanced by a single process. With the nbproc parameter, it is possible to create multiple worker processes to help distribute the workload internally.

While additional worker processes might sound good at first, they only tend to provide value when the server itself has more than 1 CPU. It is not uncommon for environments that create multiple worker processes on single-CPU systems to see that HAProxy performs worse than it did as a single-process instance. The reason is that the overhead of managing multiple worker processes yields diminishing returns once the number of workers exceeds the number of CPUs available.

With this in mind, it is recommended that the nbproc parameter should be set to match the number of CPUs available to the system. In order to tune this parameter for our environment, we first need to check how many CPUs are available. We can do this by executing the lshw command.

# lshw -short -class cpu
H/W path      Device  Class      Description
============================================
/0/401                processor  Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
/0/402                processor  Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz

From the output above, it appears that we have 2 available CPUs on our HAProxy server. Let’s go ahead and set the nbproc parameter to 2, which will tell HAProxy to start a second worker process on restart. We can do this by once again editing the global section of the /etc/haproxy/haproxy.cfg file.

global
        maxconn         5000
        nbproc          2
        cpu-map         1 0
        cpu-map         2 1

In the above HAProxy config example, I included another parameter named cpu-map. This parameter is used to pin a specific worker process to the specified CPU using CPU affinity. This allows the processes to better distribute the workload across multiple CPUs.

While this might not sound very critical at first, it is when you consider how Linux determines which CPU a process should use when it requires CPU time.

Understanding CPU Affinity

The Linux kernel internally has a concept called CPU affinity, which is where a process is pinned to a specific CPU for its CPU time. If we use our system above as an example, we have two CPUs (0 and 1) and a single-threaded HAProxy instance. Without any changes, our single worker process will end up on either CPU 0 or CPU 1.

If we were to enable a second worker process without specifying which CPU that process should have an affinity to, that process would default to the same CPU that the first worker was bound to.

The reason for this is due to how Linux handles CPU affinity of child processes. Unless told otherwise, a child process is always bound to the same CPU as the parent process in Linux. The reason for this is to allow processes to leverage the L1 and L2 caches available on the physical CPU. In most cases, this makes an application perform faster.

The downside to this can be seen in our example. If we enable two workers and both worker1 and worker2 were bound to CPU 0, the workers would constantly be competing for the same CPU time. By pinning the worker processes on different CPUs, we are able to better utilize all of our CPU time available to our system and reduce the amount of times our worker processes are waiting for CPU time.

In the configuration above, we are using cpu-map to define CPU affinity by pinning worker1 to CPU 0 and worker2 to CPU 1.

After making these changes, we can restart the HAProxy instance again and retest with the ab tool to see some significant improvements in performance.

# systemctl restart haproxy

With HAProxy restarted, let’s go ahead and rerun our test with the ab command.

# ab -c 2500 -n 5000 -s 90 http://104.131.125.168/
Requests per second:    1185.97 [#/sec] (mean)
Time per request:       2302.093 [ms] (mean)
Time per request:       0.921 [ms] (mean, across all concurrent requests)

In our previous test run, we were able to get a Requests per second of 749.22. With this latest run, after increasing the number of worker processes, we were able to push the Requests per second to 1185.97, a sizable improvement.

Adjusting the Load Balancing Algorithm

The final adjustment we will make is not a traditional tuning parameter, but it still has an importance in the amount of HTTP sessions our HAProxy instance can process. The adjustment is the load balancing algorithm we have specified.

Earlier in this post, we specified the load balancing algorithm of roundrobin in our backend service. In this next step, we will be changing the balance parameter to static-rr by once again editing the /etc/haproxy/haproxy.cfg file.

backend bencane.com
    mode    http
    balance static-rr
    server  nyc2 nyc2.bencane.com:80 check
    server  sfo1 sfo1.bencane.com:80 check

The static-rr algorithm is a round robin algorithm very similar to the roundrobin algorithm, with the exception that it does not support dynamic weighting. This weighting mechanism allows HAProxy to select a preferred backend over others. Since static-rr doesn’t worry about dynamic weighting, it is slightly more efficient than the roundrobin algorithm (approximately 1 percent more efficient).

Let’s go ahead and test the impact of this change by restarting the HAProxy instance again and executing another ab test run.

# systemctl restart haproxy

With the service restarted, let’s go ahead and rerun our test.

# ab -c 2500 -n 5000 -s 90 http://104.131.125.168/
Requests per second:    1460.29 [#/sec] (mean)
Time per request:       1711.993 [ms] (mean)
Time per request:       0.685 [ms] (mean, across all concurrent requests)

In this final test, we were able to increase our Requests per second metric to 1460.29, a sizable difference over the 1185.97 results from the previous run.

Summary

In the beginning of this article, our basic HAProxy instance was only able to service 97 HTTP requests per second. After increasing the maximum number of connections, increasing the number of worker processes, and changing our load-balancing algorithm, we were able to push our HAProxy instance to 1460 HTTP requests per second; an improvement of 1405 percent.
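For reference, the pieces of /etc/haproxy/haproxy.cfg touched in this walkthrough end up looking roughly like this (the backend name and server addresses are the example values used above):

global
        maxconn         5000
        nbproc          2
        cpu-map         1 0
        cpu-map         2 1

frontend www
    bind               :80
    mode               http
    default_backend    bencane.com

backend bencane.com
    mode    http
    balance static-rr
    server  nyc2 nyc2.bencane.com:80 check
    server  sfo1 sfo1.bencane.com:80 check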

Even with such an increase in performance, there are still more tuning parameters available within HAProxy. While this article covered a few basic and unconventional parameters, we have still only scratched the surface of tuning HAProxy. For more tuning options, you can check out HAProxy's configuration guide.




An Overview of the Persona Series


Some of this is subjective and contains spoilers. The article also does not cover Persona Q or Persona: Trinity Soul.

Where the name "女神异闻录" (Megami Ibunroku) comes from

"Megami Ibunroku" itself means "a side story of Shin Megami Tensei."

The major Shin Megami Tensei spin-off series all carried "Megami Ibunroku" as a subtitle in their Japanese releases. The US versions instead tacked on the words "Shin Megami Tensei," which led American players to spontaneously build up a "Shin Megami Tensei multiverse" theory. It is laid out very earnestly, and on close analysis it more or less holds up.

The original Persona on the PS
The Japanese version of Devil Survivor on the NDS
The US cover of Devil Survivor
The US cover of P3 FES, still carrying the "Shin Megami Tensei" name
A Google search for "Shin Megami Tensei universe" (original image 4500*3375)

Later entries, however, dropped the "Megami Ibunroku" subtitle.

Persona 2: Innocent Sin on the PS, with no "Megami Ibunroku" subtitle
Devil Survivor 2 on the NDS; the subtitle is just the English title in katakana
The same goes for the 3DS version
The PSP release of "Persona," seemingly a deliberate effort to set it apart from the main series

As for what to call the Persona series, go with whatever you like; it doesn't matter much.

The backbone of the P series

The Persona series is a relatively continuous one.

Grouping the five existing mainline games, 1 and 2 can be filed under the "early era," with the "modern era" starting at 3. Because the development staff changed, there is a clear split between the old and new eras.

A mind-centered worldview grounded in Jungian psychology is the axis that runs through the series. Human consciousness (both the conscious and the unconscious mind) is the core force shaping the world of Persona, and everything in it, good or evil, shifts as people's hearts change.

When consciousness grows strong enough, it gives birth to certain powerful entities, the bosses you routinely meet in the games. But no matter what they call themselves or how they interfere with the real world, their roots lie in human consciousness, so there are no gods in the traditional sense in the world of Persona.

And because human consciousness is diverse and constantly changing, each game's story takes a different form.

Metis explains how the Abyss of Time came to be

The concept of a "Persona"

Besides psychology, the Persona series was also hugely influenced by JoJo's Bizarre Adventure; Kazuma Kaneko, one of the original creators, is a JoJo fan. Early in development there was already a plan to make "an RPG where you attack with Stands," and even the use of tarot cards as a classification scheme is borrowed from JoJo Part 3.

Once the direction was set, the game borrowed the "demon ally" (仲魔) concept and art design from Shin Megami Tensei to save resources, and to set the two apart it fused those god-and-demon elements with Jungian theory, digging out an original piece of lore of its own and one of the series' core concepts: the Persona.

The Chariot tarot card

Although the Persona is a direct descendant of the demon ally, the two differ fundamentally. As an analogy, a demon ally is a comrade who lives and dies alongside you, while a Persona is more like a vehicle.

Demon allies are "supernatural beings" in the broad sense and have wills of their own; pretending to kneel and beg for mercy and then stabbing you in the back is something they do all the time. The games even have plot lines where one demon ally cons another, which shows how sharp they are.

A Persona, by contrast, is the concrete form of a person's inner unconscious, the same kind of thing as the "Shadows" you fight as common enemies. The difference is that a Shadow is the unconscious gone berserk, extremely hostile toward humans, while a Persona is a tamed Shadow whose form and attributes change with the bearer's personality and strength of will.

Even a tamed Persona is not entirely safe. Because the two share the same root, when a person loses control of their Persona it can go berserk, or even regress into a Shadow.

Personas are powerful, but also very dangerous
An out-of-control Persona tries to kill its owner
Under Pharos's influence, the P3 protagonist loses control of his Persona, and Orpheus regresses into Thanatos

Stepping outside the games, the word Persona comes from Latin and refers to "the mask an actor wears on stage," as distinct from the modern word "personality." A Persona is a surface layer attached outside the self in order to adapt to social life, a product of that life, and different outside forces produce different Personas.

For example, the way you treat your parents is surely different from the way you treat your coworkers.

So in theory a person holds multiple Personas. The fact that the entire party in P1 and P2 can switch Personas freely echoes exactly this. On top of that, different characters have affinities for different kinds of demons; ones with personalities similar to their own are easier to win over. Birds of a feather, I suppose.

Everyone has their own likes and dislikes

Unlike in later entries, the reason the whole party in P1 and P2 can switch Personas freely is that they all took part in a summoning ritual called the "Persona Game." Through the ritual, Philemon confirmed that they were able to keep a Persona under control, granted them the right to summon Personas, and gave every one of them access to the Velvet Room.

Having the right to summon a Persona is not the same as having the ability to. Generally, a person can only summon a Persona when in mortal danger or driven by intense will; every protagonist in the series awakens this way, without exception, and so do many of the supporting characters.

The ritual itself is drawn from a Japanese urban legend; P2 only mentions that each character has taken part in it.

http://ja.wikipedia.org/wiki/スクエア_(都市伝説)

Gaining the right to summon a Persona through a ritual never appears again in later games, which makes the position of the P1 and P2 cast in the series extremely special.

From P3 onward, only the protagonist can hold multiple Personas. As for why the other characters can't, my personal guess is that because they have no access to the Velvet Room, they can't connect to the collective unconscious and so can only use their own Persona.

In P5, Goro Akechi receives the ability to hold multiple Personas from the "false god," which more or less bears that guess out.

The inner demon called the "Shadow"

Early in the P series, the Shadow concept was not yet fully formed. Shadows were not the usual rank-and-file enemies; the term generally referred to a piece split off from a specific character's unconscious.

In P1, the heroine Maki's Shadow is a rare exception: cheerful, lively, and even one of the player's companions.

Maki's Shadow in the P1 era arose from her repressed feelings, not from the now-standard notion of "one's own dark side."

Futaba Sakura in P5 resembles Maki in quite a few ways, but Futaba's Shadow is a blend of the old and new concepts: it preserves Futaba's repressed will to live and her love for her mother while also carrying the Shadows' intense hostility toward humans.

The real person is about this gloomy

Only with P2 did the series arrive at the Shadow concept that is now standard: a character's dark side, or the side they refuse to face, hostile toward humans. Lisa's Shadow, for instance, calls out her "inner selfishness," and Shadow Tatsuya bluntly lays bare Tatsuya's hatred of Jun. After being defeated, every Shadow also says "I am a part of you," a warning that it can never be wiped out completely.

P2 also gave Shadows a distinctive look, with eyes that appear yellow or red; from P3 onward this was standardized to yellow eyes.

Mitsuru Kirijo's Shadow in P4U

Using "Shadow" as the umbrella term for enemies likewise starts with P3; before that, the small fry you meet along the way were referred to as "demons."

The Shadows from P3 onward are mostly products of negative emotion in the collective unconscious, the same thing as the earlier "demons," just under a different name.

The Velvet Room

The Velvet Room is another core concept of the Persona series.

In the earliest lore, the Velvet Room is a room between "mind" and "matter" created by Philemon, the embodiment of reason within the collective unconscious. You can think of it as a passage linking the two. And that passage is not open only to the protagonist; the Velvet Room also receives people from all walks of life.

From P3 onward the "between dream and reality" framing was added, and entering the room requires signing a contract. Either way, it is only in the Velvet Room that one can connect to the collective unconscious.

Philemon, the room's creator, went into hiding after P3, but he never disappeared.

Philemon's avatar in the real world has always been a butterfly, an allusion to Zhuangzi's dream of the butterfly. He appears as a golden butterfly in P1 and P2 and as a blue butterfly from P3 onward, for reasons unknown.

Philemon, the embodiment of humanity's rational unconscious
A Q&A between American players and the developers on the DoubleJumpBooks forum
Answering a question about Philemon, the developers confirm that he appears as a blue butterfly

Igor, the master of the Velvet Room, is likewise Philemon's creation. His name was inspired by Son of Frankenstein (1939), one of the films derived from Mary Shelley's famous novel.

According to the lore, after meeting so many people Igor came to question his own existence; he cannot tell whether he is a "person" or a "puppet," and on the day he finds his answer, he will leave the Velvet Room.

Igor's duty is to receive guests of every kind. In the games, since Philemon only guides the party, Igor acts in his stead, supplying the party with the power they need on their journey.

The phone in Igor's hand is a Persona summoning device
"Ygor" and the monster in the film
Aigis entering the Velvet Room in P3 FES: The Answer
Igor knows that he is a created being

It's worth mentioning that Igor's voice actor, Isamu Tanonaka (田の中 勇), passed away in 2010, yet later games still reuse his old voice recordings; new lines simply have no voice. It really isn't a haunting.

That includes the recent P5: once the player rescues the real Igor at the end, that familiar voice can be heard again.

Besides Igor, the Velvet Room has housed a series of distinctive attendants, such as Nameless and Belladonna, the performer and singer of the room's theme, "Poem for Everyone's Souls" (Aria of the Soul).

Belladonna; the female voice heard upon entering the room is hers
Nameless, who has spent 900,155 days in the Velvet Room

The Demon Painter, Kazuma Kaneko's self-insert.

As long as he's enjoying himself

Theodore, adorably clueless and rigid.

The attendants often do absurd things out in the real world

Elizabeth, who left the Velvet Room of her own accord to rescue the P3 protagonist, and who awakened to the "Fool" arcana.

Margaret, Elizabeth, and Theodore are actually siblings; Margaret is the eldest and Theodore the youngest.

Margaret, whose Personas are all male, which earned her the nickname "the cruel secretary who tames handsome men" in P4U2.

The P series has plenty of nods to JoJo; this pose is said to imitate Joseph Joestar

The twins Caroline and Justine, and one special attendant, Lavenza.

They don't really treat the protagonist as a person
Lavenza

The attendants above take all kinds of forms, and aside from Belladonna and Nameless, their names all come from Mary Shelley's Frankenstein. Elizabeth and Lavenza are in fact the novel's Elizabeth Lavenza split in two, and the others all have corresponding characters as well; interested players can dig into it.

Where Persona began

To understand the new, revisit the old: before discussing the later games, we need to introduce the team behind P1.

Scenario: Tadashi Satomi. Producer/director: Kouji Okada. Art: Kazuma Kaneko.

From left to right: Tadashi Satomi, Kouji Okada, Kazuma Kaneko

Tadashi Satomi, born in 1970, has since left the company. He wrote the scenarios for P1, P2: Innocent Sin, and P2: Eternal Punishment; P1 was his first game, and the last one he worked on was Digital Devil Saga 2 on the PS2. Reportedly, the Persona worldview was built jointly by Satomi, Okada, and Kaneko, with most of the foundational lore created by Satomi, who fused Jungian psychology with the Cthulhu Mythos into the games' original setting.

Takahisa Kandori, whose Persona is Nyarlathotep from the Cthulhu Mythos

Kazuma Kaneko, born in 1964, never went to university and is said to have taught himself to draw. He joined Atlus in 1988, is now part of Atlus's management, and most recently worked on the designs for Shin Megami Tensei IV and Shin Megami Tensei IV Final.

Kouji Okada, born in 1964, is one of Atlus's founders and took part in the birth and growth of the Shin Megami Tensei series; he also served as director and producer on P1, P2: Innocent Sin, and P2: Eternal Punishment. He left the company in 2003 to found Gaia, and the last game he oversaw at Atlus was Shin Megami Tensei III.

One of the very few games Gaia developed: Monster Kingdom: Jewel Summoner on the PSP
Another Gaia game: Coded Soul on the PSP

The scenario went to the newcomer Satomi, while art design and direction stayed with the Shin Megami Tensei veterans; the skin was new, but the bones were not. Throughout P1 and P2 you can clearly see Shin Megami Tensei's shadow, and characters from the main series even drop in for cameos, affirming the link between the mainline games and the spin-off.

Tamaki Uchida, the female protagonist of Shin Megami Tensei if..., appears in both P2: Innocent Sin and Eternal Punishment

And although P1 and P2 have different story settings, characters from P1 string the two games together; in the ending of P2: Eternal Punishment the whole P1 party even gathers to welcome the P1 protagonist back. P1 and P2, both helmed by these three, are the two most tightly connected entries in the series.

In P2: Eternal Punishment, the rumors the player chooses decide which of them joins the party
Yukino Mayuzumi, a party member in Innocent Sin who also shows up in Eternal Punishment; her armor-like coat is a great design

The transformation that began with P3

After the turn of the century, the series' two pillars, Tadashi Satomi and Kouji Okada, both left Atlus, making a major reshuffle for the next game inevitable. Starting with P3, Katsura Hashino took over as director and producer, art passed to Shigenori Soejima, and music was handled by Shoji Meguro. A more outright "fun to play" Persona series appeared before players.

Hashino deserves special mention here. Before becoming director he had made two games at Atlus: Trauma Center (超执刀) on the NDS and Shin Megami Tensei III on the PS2. The former is a fine game but has nothing to do with the P series; the latter, however, laid the foundation of the battle system the Persona series uses today.

Katsura Hashino

In Shin Megami Tensei III, when one of your characters lands a critical hit or strikes an enemy's weakness, they gain an extra turn. Introducing this system made battles more strategic, and building on it, the team drew on its P2 experience to create the all-out "gang-up" attack, which sped battles up. From P3 onward, these two systems have been the core of every game's combat.

Combat that weighs strategy and speed equally became one of the P series' signatures.

The recent P5 also absorbed the Pass system from Shin Megami Tensei III, which hands your turn over to an ally. The difference is that in Nocturne no critical or weakness hit is required and the turn can be passed at any time, but you cannot choose the recipient; it simply goes to whoever moves next.

While inheriting that battle system, Hashino also reworked the traditional structure, splitting the game into two alternating halves, "fighting Shadows" and "daily life." This took the Persona series a long stride down the bright road of relationship sims and made it the bridgehead and vanguard of dating games with Atlus characteristics.

Jokes aside, these changes really did raise the series' playability to a new level. Besides winning plenty of awards, it also stirred up attention in Japan.

In Tokimeki Memorial 3, during the monster showdown event on Serika Kanjou's route, a background track strikingly similar to the Velvet Room theme appears. Tokimeki Memorial 3 was released after P2: Eternal Punishment, which shows how much influence the Persona series already carried at the time.

Could P3 actually be a tribute to Tokimeki Memorial?

Persona also makes an appearance in Yusuke Murata's Manga Classroom R, published in Taiwan by Tong Li.

The woman in the panel is Mizuki Kawashita, author of Strawberry 100%

After P5's release it all but swept Japan; by now Persona has arguably surpassed Shin Megami Tensei in influence, a scene even the founding team probably never imagined.

Behind the transformation

The P3 overhaul was really a trade-off Hashino made for playability: keep the strengths of the RPG while borrowing structures from other genres to shore up the traditional RPG's weaknesses.

To fit the new structure, Hashino did away with the traditional RPG's draggy rhythm of "grind for three hours, then watch three minutes of story." Each stretch of the main plot was made far denser, and splitting the plot apart just to pad the play time was avoided as much as possible. Every time the player defeats a boss, the main story visibly moves forward, a feeling that becomes even stronger in P4 and P5.

Meanwhile, the new life-sim half fills the space between story beats, easing the tedium of repetition while fleshing the game out with rich content and unexpected surprises. This two-pronged structure laid the groundwork for the series' later development.

Beyond the gameplay changes, Hashino also adjusted several core concepts to bring them closer to Jung's theory. The essence of the life-sim half, Social Links, echoes Jung's point that the persona is shaped jointly by two otherwise unconnected forces, one inner and one outer.

The new description of the Velvet Room as lying "between dream and reality" in turn corresponds to the idea that, in the realm of the collective unconscious, another and truer center often appears in dreams.

As for the Velvet Room attendants, they presumably affirm the dual nature of the unconscious; they are most likely embodiments of the positive emotions within the collective unconscious.

The attendants' eyes are yellow, just like the Shadows'

Easter eggs scattered everywhere

To fit the new game structure, Hashino reorganized and revised, along Jungian lines, whatever lore did not suit P3, essentially laying the groundwork for the P series as it stands now.

A sharp break from the two foundational games was unavoidable, but the new era and the old are still connected.

An easy one to spot: when Mitsuru Kirijo talks with her father, he mentions the Nanjo Group and stresses that the Kirijo Group was once part of it but has since broken away.

A more hidden one: every Sunday, turning on the TV in the first-floor lounge shows a report on Phoenix Ranger Featherman R.

It keeps appearing as an easter egg in later games and is mentioned in P5 as well

An even more hidden one: at one or two specific times each month, turning on the first-floor TV brings up a segment called 《偶然見かけた、こんな人》 ("People I Happened to Spot").

If you turn on the TV on December 30, you will see this description: "Ran into a sharply dressed man in his thirties; he is a police officer, and he really loves sweets." Anyone who has played P2 will recognize him at once: Katsuya Suou.

Because of a certain incident, he gave up his dream of becoming a pastry chef and joined the police

American fans dug into it and confirmed that this segment mentions nearly every major P1 and P2 character, each with their telltale traits; if you're curious, go back to P3 and look for them.

The segment's host shares a name with the money-grubbing little fairy

From P3 onward, characters from the earlier games mostly appear as easter eggs, tucked into the game's small corners; plenty of spots you would normally overlook hide something.

Vincent from Catherine also appears as an easter egg in P3P

Still, quite a few characters show up directly or are mentioned in dialogue, such as Chihiro Fushimi and President Tanaka in P4. The protagonists are usually only referred to indirectly, like Naoto Shirogane in P5 and Mitsuru Kirijo in P4. These hints, overt or subtle, make it clear that everything from P3 onward takes place in the same world.

Even so, because the setting changed so drastically, and because the P1 and P2 cast never appear in these games, the new and old entries feel more like they sit in parallel worlds.

Under the P3 protagonist's "guidance," Chihiro Fushimi overcomes her fear of men
The official guidebook also indirectly acknowledges the link between P3 and P4

P4's remarkable evolution

P3's success is beyond doubt. But as a new experiment for the series, problems large and small are scattered all over it, in both story and gameplay. P3 is great, but flawed.

The most serious of these flaws is how little the protagonist actually takes part in the plot.

The P3 protagonist only has to chat a bit, fight a few Shadows, clock in on full-moon nights, and he can save the world from his bed. Events come looking for the protagonist; the protagonist never goes out to resolve them.

Saving the world while sleeping in every day: calling the P3 protagonist the most easygoing hero in RPG history does not feel like an exaggeration.

It feels exactly like this

Yet P4, the follow-up, fixed many of P3's problems in a remarkably short time. Keep in mind that only two years separate P3 and P4, with P3 FES squeezed in between, and the producer has said P4 took roughly a year and a half from green-light to completion.

That Hashino pulled off such a breakthrough in so little time, a qualitative leap over P3, is genuinely impressive.

Story-wise, the team cast the party as a band of junior detectives who push the plot forward through their own effort and reasoning. The deductions are not brilliant, but they are sound, and along the way the group runs into plenty of obstacles and misdirection, giving the plot its twists and turns; only after overcoming them all does the party uncover the truth behind the case.

Not an especially novel approach, but an effective one; P5's plot is arranged in much the same way.

On the gameplay side, every aspect evolved across the board: more detail, a more entertaining loop, more stylish design. From catching bugs to fishing, eating out to part-time jobs, the daily-life content is extremely rich. Character development is more varied and the Social Links are more cleverly designed; it is hard to find an obvious flaw in the gameplay.

P4 is, without question, a fun game.

Plenty of players got into the series through P4G

"Aeon," the arcana every enhanced edition gets

After the original releases, P3 and P4 each received an enhanced edition. Besides new dialogue and voice work, each also added a new tarot card: Aeon (永劫).

P3 FES was the first to create the Aeon card, outside the traditional tarot system.

In P4G, Aeon appears again.

Between P3 FES and P4G, the pattern is essentially set: every enhanced edition gets an Aeon.

Card number 20

In the English versions, 永劫 is rendered as "Aeon," meaning "an extremely long period of time," close to "eternity."

Aeon's number is 20, and in the traditional tarot, number 20 is Judgement. P3 once explained the meaning of Judgement: the end of the journey. In the game, the story only reaches its true ending once Judgement is raised to its maximum rank.

So Aeon can be read as the key that leads to the truth.

What's more, the Aeon holders share one trait: they are searching for the meaning of their own existence. Marie is, and so is Aigis. Considering that both are non-human and both are tied to the Velvet Room, these may well turn out to be the traits of a future Aeon holder for P5.

That said, the Aeon card only ever appears in the enhanced editions; bluntly put, if you want the full truth, get ready to pay again (so that is what "Aeon" really means).

If the Aeon holder is a woman again, that makes a full ten romance options; the day the protagonist turns into a twelve-winged archangel is not far off

"Dedicated to everyone who loves RPGs"

To sum up P5 in one line: it is the most fun Persona you can play. For me, it lives up to the tagline on the back of the box, "dedicated to all players who love RPGs."

It is another across-the-board step up for the series. Most importantly, the story picks up quite a few old plot threads, and for the first time since P3 FES the entire party enters the Velvet Room again. Perhaps in a future entry we really will see the old characters return.

In a nice turn, P5 picks up plenty of old plot threads

I believe P5 is by no means the series' ceiling; whatever comes next will surely be even more fun. I'm looking forward to it.

P5's success also owes a great deal to the art team working behind the scenes