Tuesday, June 10, 2008


LinkedIn Architecture

Tag: Scalability | Oren Hurvitz @ 12:20 am

At JavaOne 2008, LinkedIn employees presented two sessions about the LinkedIn architecture. The slides are available online.

They are hosted on SlideShare; if you register, you can download them as PDFs.

This post summarizes the key parts of the LinkedIn architecture. It’s based on those presentations and on additional comments made during the sessions at JavaOne.

Site Statistics

  • 22 million members
  • 4+ million unique visitors/month
  • 40 million page views/day
  • 2 million searches/day
  • 250K invitations sent/day
  • 1 million answers posted
  • 2 million email messages/day

Software

  • Solaris (running on Sun x86 platform and Sparc)
  • Tomcat and Jetty as application servers
  • Oracle and MySQL as DBs
  • No ORM (such as Hibernate); they use straight JDBC
  • ActiveMQ for JMS. (It’s partitioned by type of messages. Backed by MySQL.)
  • Lucene as a foundation for search
  • Spring as glue

Server Architecture

2003-2005

  • One monolithic web application
  • One database: the Core Database
  • The network graph is cached in memory in The Cloud
  • Member Search is implemented using Lucene. It runs on the same server as The Cloud, because member searches must be filtered according to the searching user’s network, so it’s convenient to have Lucene on the same machine as The Cloud.
  • WebApp updates the Core Database directly. The Core Database updates The Cloud.

2006

  • Added Replica DBs to reduce the load on the Core Database. They contain read-only data. A RepDB server manages updates of the Replica DBs.
  • Moved Search out of The Cloud and into its own server.
  • Changed the way updates are handled by adding the Databus, a central component that distributes updates to any component that needs them. This is the new updates flow (a sketch follows the list):
    • Changes originate in the WebApp
    • The WebApp updates the Core Database
    • The Core Database sends updates to the Databus
    • The Databus sends the updates to: the Replica DBs, The Cloud, and Search
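
As a rough sketch of this flow (the class and interface below are invented for illustration; LinkedIn’s actual Databus is of course more involved), the Databus behaves like a publish/subscribe hub:

    // Hypothetical sketch of the Databus: the Core Database publishes each
    // committed change, and the Databus fans it out to every registered
    // consumer (the Replica DBs, The Cloud, and Search).
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class Databus {

        /** Implemented by the RepDB updater, The Cloud, and the Search indexer. */
        public interface UpdateConsumer {
            void onUpdate(String entity, long entityId, String payload);
        }

        private final List<UpdateConsumer> consumers =
                new CopyOnWriteArrayList<UpdateConsumer>();

        public void register(UpdateConsumer consumer) {
            consumers.add(consumer);
        }

        /** Called on behalf of the Core Database whenever a change is committed. */
        public void publish(String entity, long entityId, String payload) {
            for (UpdateConsumer consumer : consumers) {
                consumer.onUpdate(entity, entityId, payload);
            }
        }
    }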

2008

  • The WebApp doesn’t do everything itself anymore: they split parts of its business logic into Services.
    The WebApp still presents the GUI to the user, but now it calls Services to manipulate the Profile, Groups, etc. (a sketch follows this list).
  • Each Service has its own domain-specific database (i.e., vertical partitioning).
  • This architecture allows other applications (besides the main WebApp) to access LinkedIn. They’ve added applications for Recruiters, Ads, etc.
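
To make the split concrete (the interface and method names below are assumptions, not LinkedIn’s actual APIs), a WebApp controller now delegates to a service instead of hitting the database directly:

    // Hypothetical sketch: the WebApp keeps the GUI, the Profile service owns the
    // profile database, and the WebApp talks to it through an interface wired up
    // by Spring and called over RPC.
    public class ProfileController {

        /** Implemented by the Profile service, which owns its own database. */
        public interface ProfileService {
            String getHeadline(long memberId);
            void updateHeadline(long memberId, String headline);
        }

        private final ProfileService profileService; // injected by Spring

        public ProfileController(ProfileService profileService) {
            this.profileService = profileService;
        }

        /** The WebApp still renders the page; the data comes from the service. */
        public String showProfilePage(long memberId) {
            return "Headline: " + profileService.getHeadline(memberId);
        }
    }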

The Cloud

  • The Cloud is a server that caches the entire LinkedIn network graph in memory.
  • Network size: 22M nodes, 120M edges.
  • Requires 12 GB RAM.
  • There are 40 instances in production.
  • Rebuilding an instance of The Cloud from disk takes 8 hours.
  • The Cloud is updated in real-time using the Databus.
  • Persisted to disk on shutdown.
  • The cache is implemented in C++, accessed via JNI (a sketch follows this list). They chose C++ instead of Java for two reasons:
    • To use as little RAM as possible.
    • Garbage Collection pauses were killing them. [LinkedIn said they were using advanced GCs, but GCs have improved since 2003; is this still a problem today?]
  • Having to keep everything in RAM is a limitation, but as LinkedIn have pointed out, partitioning graphs is hard.
  • [Sun offers servers with up to 2 TB of RAM (Sun SPARC Enterprise M9000 Server), so LinkedIn could support up to 1.1 billion users before they run out of memory. (This calculation is based only on the number of nodes, not edges). Price is another matter: Sun say only "contact us for price", which is ominous considering that the prices they do list go up to $30,000.]
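
A minimal sketch of what “a C++ cache accessed via JNI” looks like from the Java side (class, method, and library names are made up for illustration):

    // Hypothetical JNI facade over the C++ in-memory graph. The heavy lifting
    // happens in native code; Java only declares the entry points and loads the
    // shared library.
    public class GraphCache {

        static {
            // Loads e.g. libgraphcache.so, built from the C++ implementation.
            System.loadLibrary("graphcache");
        }

        /** Implemented in C++: returns the member IDs directly connected to memberId. */
        public native long[] getConnections(long memberId);

        /** Implemented in C++: applies a Databus update to the in-memory graph. */
        public native void addEdge(long fromMemberId, long toMemberId);
    }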

The Cloud caches the entire LinkedIn Network, but each user needs to see the network from his own point of view. It’s computationally expensive to calculate that, so they do it just once when a user session begins, and keep it cached. That takes up to 2 MB of RAM per user. This cached network is not updated during the session. (It is updated if the user himself adds/removes a link, but not if any of the user’s contacts make changes. LinkedIn says users won’t notice this.)
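
A plausible sketch of that once-per-session computation, assuming the GraphCache sketch above and representing the member’s view as “degree of separation per reachable member” (the slides don’t describe LinkedIn’s actual representation):

    // Hypothetical per-session view builder: a bounded breadth-first search over
    // the in-memory graph, run once when the session starts and then cached.
    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;

    public class NetworkViewBuilder {

        private final GraphCache graph;

        public NetworkViewBuilder(GraphCache graph) {
            this.graph = graph;
        }

        /** Returns memberId -> degree of separation (1..maxDegree) from the viewer. */
        public Map<Long, Integer> build(long viewerId, int maxDegree) {
            Map<Long, Integer> degreeOf = new HashMap<Long, Integer>();
            Queue<Long> frontier = new ArrayDeque<Long>();
            degreeOf.put(viewerId, 0);
            frontier.add(viewerId);
            while (!frontier.isEmpty()) {
                long current = frontier.poll();
                int degree = degreeOf.get(current);
                if (degree == maxDegree) {
                    continue; // don't expand beyond the viewer's visible network
                }
                for (long neighbor : graph.getConnections(current)) {
                    if (!degreeOf.containsKey(neighbor)) {
                        degreeOf.put(neighbor, degree + 1);
                        frontier.add(neighbor);
                    }
                }
            }
            degreeOf.remove(viewerId); // the viewer isn't part of his own view
            return degreeOf;
        }
    }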

As an aside, they use Ehcache to cache members’ profiles. They cache up to 2 million profiles (out of 22 million members). They tried caching with the LFU (Least Frequently Used) algorithm, but found that Ehcache would sometimes block for 30 seconds while recalculating LFU, so they switched to LRU (Least Recently Used).
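
This isn’t LinkedIn’s Ehcache configuration, but as a minimal illustration of the LRU policy they settled on, plain Java can express it with LinkedHashMap’s access-order mode:

    // Toy LRU cache: LinkedHashMap with accessOrder = true evicts the least
    // recently used entry once the cap (e.g. 2 million profiles) is exceeded.
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ProfileLruCache<K, V> extends LinkedHashMap<K, V> {

        private final int maxEntries;

        public ProfileLruCache(int maxEntries) {
            super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // drop the least recently used profile
        }
    }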

Communication Architecture

Communication Service

The Communication Service is responsible for permanent messages, e.g. InBox messages and emails.

  • The entire system is asynchronous and uses JMS heavily
  • Clients post messages via JMS (see the sketch after this list)
  • Messages are then routed via a routing service to the appropriate mailbox or directly for email processing
  • Message delivery: either Pull (clients request their messages), or Push (e.g., sending emails)
  • They use Spring, with proprietary LinkedIn Spring extensions, and HTTP-RPC for remote calls.
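
A rough sketch of “clients post messages via JMS” using ActiveMQ (the queue name and message properties are assumptions, not LinkedIn’s conventions):

    // Hypothetical producer: posts an inbox message onto an ActiveMQ queue, after
    // which the routing service picks it up asynchronously.
    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class InboxMessageSender {

        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // One queue per message type, mirroring the "partitioned by type of
                // messages" note in the Software section.
                MessageProducer producer =
                        session.createProducer(session.createQueue("inbox.messages"));
                TextMessage message = session.createTextMessage("Hello from member 42");
                message.setLongProperty("recipientMemberId", 17L);
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }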

Scaling Techniques

  • Functional partitioning: sent, received, archived, etc. [a.k.a. vertical partitioning]
  • Class partitioning: Member mailboxes, guest mailboxes, corporate mailboxes
  • Range partitioning: Member ID range; Email lexicographical range. [a.k.a. horizontal partitioning] (See the sketch after this list.)
  • Everything is asynchronous
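
As a simple illustration of range partitioning (the shard width and lookup scheme are invented here), routing a mailbox operation by member ID might look like:

    // Hypothetical shard router: each mailbox database owns a contiguous range of
    // member IDs; the router picks the right DataSource for a given member.
    import java.util.List;
    import javax.sql.DataSource;

    public class MailboxShardRouter {

        private static final long MEMBERS_PER_SHARD = 1000000L; // assumed range width

        private final List<DataSource> shards; // one data source per member-ID range

        public MailboxShardRouter(List<DataSource> shards) {
            this.shards = shards;
        }

        public DataSource shardFor(long memberId) {
            int index = (int) (memberId / MEMBERS_PER_SHARD);
            return shards.get(Math.min(index, shards.size() - 1));
        }
    }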

Network Updates Service

The Network Updates Service is responsible for short-lived notifications, e.g. status updates from your contacts.

Initial Architecture (up to 2007)

  • There are many services that can contain updates.
  • Clients make separate requests to each service that can have updates: Questions, Profile Updates, etc.
  • It took a long time to gather all the data.

In 2008 they created the Network Updates Service. The implementation went through several iterations:

Iteration 1

  • Client makes just one request, to the NetworkUpdateService.
  • NetworkUpdateService makes multiple requests to gather the data from all the services. These requests are made in parallel (see the sketch after this list).
  • The results are aggregated and returned to the client together.
  • Pull-based architecture.
  • They rolled out this new system to everyone at LinkedIn, which caused problems while the system was stabilizing. In hindsight, they should have tried it out on a small subset of users first.
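
A sketch of that parallel fan-out (the downstream service calls and thread pool are placeholders; the slides don’t show LinkedIn’s aggregation code):

    // Hypothetical iteration-1 aggregator: one client call fans out to the
    // underlying services in parallel and merges the results.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class NetworkUpdateService {

        private final ExecutorService pool = Executors.newFixedThreadPool(10);

        public List<String> getUpdates(final long memberId) throws Exception {
            // Each Callable wraps one downstream service (Questions, Profile Updates, ...).
            List<Callable<List<String>>> calls = new ArrayList<Callable<List<String>>>();
            calls.add(new Callable<List<String>>() {
                public List<String> call() { return questionUpdates(memberId); }
            });
            calls.add(new Callable<List<String>>() {
                public List<String> call() { return profileUpdates(memberId); }
            });

            // invokeAll runs the requests in parallel and blocks until all complete.
            List<String> aggregated = new ArrayList<String>();
            for (Future<List<String>> result : pool.invokeAll(calls)) {
                aggregated.addAll(result.get());
            }
            return aggregated;
        }

        // Placeholders for calls to the real services.
        private List<String> questionUpdates(long memberId) { return new ArrayList<String>(); }
        private List<String> profileUpdates(long memberId)  { return new ArrayList<String>(); }
    }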

Iteration 2

  • Push-based architecture: whenever events occur in the system, add them to the user’s "mailbox". When a client asks for updates, return the data that’s already waiting in the mailbox.
  • Pros: reads are much quicker since the data is already available.
  • Cons: might waste effort on moving around update data that will never be read. Requires more storage space.
  • There is still post-processing of updates before returning them to the user. E.g.: collapse 10 updates from a user to 1.
  • The updates are stored in CLOBs: 1 CLOB per update-type per user (for a total of 15 CLOBs per user).
  • Incoming updates must be appended to the CLOB. They use optimistic locking to avoid lock contention (see the sketch after this list).
  • They had set the CLOB size to 8 KB, which was too large and led to a lot of wasted space.
  • Design note: instead of CLOBs, LinkedIn could have created additional tables, one for each type of update. They said they didn’t do this because of what happens when updates expire: with separate tables they would have had to delete rows, and that’s very expensive.
  • They used JMX to monitor and change the configuration in real-time. This was very helpful.
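
A hypothetical JDBC sketch of the optimistic-locking append (the table, column, and version-column names are assumptions based on the description above; real code would also retry on conflict and close its resources):

    // Read the per-user, per-update-type CLOB together with its version, append the
    // new update, and write it back only if nobody else has bumped the version.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UpdateMailboxDao {

        /** Returns true if the append succeeded, false if a concurrent writer won. */
        public boolean appendUpdate(Connection conn, long memberId, int updateType,
                                    String newUpdate) throws SQLException {
            PreparedStatement read = conn.prepareStatement(
                    "SELECT updates, version FROM member_updates " +
                    "WHERE member_id = ? AND update_type = ?");
            read.setLong(1, memberId);
            read.setInt(2, updateType);
            ResultSet rs = read.executeQuery();
            if (!rs.next()) {
                return false;
            }
            String updates = rs.getString(1);
            long version = rs.getLong(2);

            PreparedStatement write = conn.prepareStatement(
                    "UPDATE member_updates SET updates = ?, version = version + 1 " +
                    "WHERE member_id = ? AND update_type = ? AND version = ?");
            write.setString(1, updates + newUpdate);
            write.setLong(2, memberId);
            write.setInt(3, updateType);
            write.setLong(4, version);
            return write.executeUpdate() == 1; // 0 rows means someone else got there first
        }
    }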

Iteration 3

  • Goal: improve speed by reducing the number of CLOB updates, because CLOB updates are expensive.
  • Added an overflow buffer: a VARCHAR(4000) column where data is added initially. When this column is full, dump it to the CLOB. This eliminated 90% of CLOB updates. (A sketch follows this list.)
  • Reduced the size of the updates.
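
A toy model of the overflow buffer’s behavior (the real implementation writes to database columns; the point here is the flush decision, which is what cut CLOB updates by 90%):

    // New updates land in the small buffer (the VARCHAR(4000) column); the CLOB is
    // touched only when the buffer cannot hold the next update.
    public class OverflowBuffer {

        private static final int CAPACITY = 4000; // matches the VARCHAR(4000) column

        private final StringBuilder buffer = new StringBuilder();
        private int clobWrites = 0; // the expensive operation being minimized

        /** Appends an update; flushes to the CLOB only when the buffer would overflow. */
        public void append(String update) {
            if (buffer.length() + update.length() > CAPACITY) {
                flushToClob();
            }
            buffer.append(update);
        }

        private void flushToClob() {
            clobWrites++; // in the real system: one CLOB UPDATE statement
            // ... the buffer contents would be written to the CLOB column here ...
            buffer.setLength(0);
        }

        public int getClobWrites() {
            return clobWrites;
        }
    }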

[LinkedIn have had success in moving from a Pull architecture to a Push architecture. However, don't discount Pull architectures. Amazon, for example, use a Pull architecture: in “A Conversation with Werner Vogels”, Amazon's CTO said that a visit to Amazon's front page typically calls more than 100 services to construct the page.]



The presentation ends with some tips about scaling. These are oldies but goodies:

  • Can’t use just one database. Use many databases, partitioned horizontally and vertically.
  • Because of partitioning, forget about referential integrity or cross-domain JOINs.
  • Forget about 100% data integrity.
  • At large scale, cost is a problem: hardware, databases, licenses, storage, power.
  • Once you’re large, spammers and data-scrapers come a-knocking.
  • Cache!
  • Use asynchronous flows.
  • Reporting and analytics are challenging; consider them up-front when designing the system.
  • Expect the system to fail.
  • Don’t underestimate your growth trajectory.
