The real-time revolution at work
REAL-TIME SEARCH AT YAMMER
May 25, 2011
By Boris Aleksandrovsky
Yammer, Inc.
http://www.linkedin.com/in/baleksan
Communication is hard, search is harder
• What, me grammar?
• Private language
• Conversational language
• Time compressed
• Transient
• Poorly organized
• Authority is suspect
• Social pressures
Challenges - From information to knowledge
[Diagram: a stack rising from Messages + Metadata, through Information + Facts, up to Knowledge, paired with Attention, Engagement, and Retention; Personalized Search spans the layers.]
Agenda
• Background
• Why search?
• Indexing
• Search
• Tools and methodologies
• Lessons learned
• Future
• Q&A
Yammer: Putting Social Media to Work
[Diagram: Enterprise Collaboration is outcome-focused, sitting between Knowledge Management (document-oriented) and Social Media (people-centric).]
Similar to: Facebook, Twitter, Wikis, Groups
Yammer makes work:
• Real-time, Social, Mobile
• Collaborative, Contextual
• More Human!
Yammer: The Enterprise Social Network
Easy. Shared. Searchable. Real-time. Where your company’s knowledge lives.
• Messaging and Feeds
• Direct Messaging
• User Profiles
• Company Directory
• Groups (Internal)
• Communities (External)
• File Sharing
• Applications
• Integrations
• Web, Desktop, Mobile, Tablet
• Translations
• Network Consultation and Support
100,000+ companies, including 85% of the Fortune 500 – and growing.
What do you discuss at work, and with whom?
• What do our employees think of our 401K program? Is everybody saving?
• Who will I be working with on this new project?
• What’s the latest with the XYZ account?
• How can my team better prepare for our next product release?
• What will be discussed at our Quarterly Sales Kickoff?
• Where can I find out more about customer events here at the ABC conference? Who’s free to meet up?
• What are our recommendations for financial and regulatory reform given the latest news about…?
• Who has any fresh ideas for…
• Who do you need to communicate with, across the company?
• How often are the same questions asked?
• Who has the answers? Who has new ideas? Who can help?
Search use case - Transient Awareness
• Reverse-chronological
• Simple queries
• Facets:
  • Date
  • Sender
  • Group
Search use case - Knowledge Exploration
• Complicated relevance story:
  • tf/idf
  • popularity
  • engagement
  • social distance
• Complicated queries
• Facets:
  • Date
  • Sender
  • Group
  • Object type
Challenges for Yammer’s search engine
• More knowledge is generated in real time
• Availability latency < 1 sec
• Not always well formed
• Complicated relevance story:
  • experts and their reputation
  • popularity
  • social graph
  • tagging/topics
  • engagement signals
  • timeliness
  • location
Team
• 2 engineers
• 8 man-months
• Lots of fun
Indexing • DB to replica
Replication
• Independent near-replicas based on a single distributed source of truth
• Can (will) get out of sync
• Automatic monitoring of replication quality (a sketch follows below):
  • Are replicas out of sync with other replicas?
    • number of docs
    • alert > X
  • Are replicas out of sync with the DB?
    • statistical sample of docs
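A minimal sketch of the replica-vs-replica check, assuming a hypothetical ReplicaClient handle and an illustrative threshold for the "alert > X" rule; this is not Yammer's actual monitoring code:

```java
import java.util.List;

public class ReplicaMonitor {
    private static final long MAX_DOC_DRIFT = 1000; // the "alert > X" threshold (illustrative)

    interface ReplicaClient { long docCount(); }    // hypothetical handle to one replica

    // Compare document counts across replicas and alert once the spread
    // exceeds the threshold.
    public void checkReplicaDrift(List<ReplicaClient> replicas) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (ReplicaClient r : replicas) {
            long n = r.docCount();
            min = Math.min(min, n);
            max = Math.max(max, n);
        }
        if (max - min > MAX_DOC_DRIFT) {
            System.err.println("[ALERT] replicas out of sync: doc count spread = " + (max - min));
        }
    }
}
```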
Indexing • In-replica to index
[Diagram: the in-replica-to-index pipeline; the legible label reads 30s.]
Why is it hard?
• No timeliness guarantee
• Fragmentation
• Out-of-order deliveries
• Index dependencies
  • Need to denormalize the information
• Need to build for network partition tolerance and redundancy
• But:
  • Eventual consistency
  • Eventual delivery
How do we cope?
• Out-of-order delivery is the source of (most) evil
• A) Assure in-order delivery
  • buffer and wait
  • degrades performance, availability, and timeliness, and is only very eventually consistent
• B) Minimize the probability and ignore
  • timestamp precision
  • clock skew
• C) Arbitrate (see the sketch after the race examples below)
  • timestamps / vector clocks
  • semantics
  • need to index lifecycle events
• Still need to build for network partition tolerance and redundancy
Delete-update race
• [create Message “hello” id=5 ts=12:34:39]
• [delete Message “hello there” id=5 ts=12:45:01]
• [modify Message “hello there” id=5 ts=12:45:01]
id | timestamp | tombstone
5  | 12:34:39  | no
5  | 12:45:01  | yes
Multiple update race
• [create Message “hello” id=5 ts=12:34:39]
• [modify Message “hello there now” id=5 ts=12:45:01]
• [modify Message “hello there” id=5 ts=12:45:01]
id | timestamp | text
5  | 12:34:39  | hello
5  | 12:45:01  | hello there now
Dupes
• [create Message “hello” id=5 ts=12:34:39]
• [like Message id=5 userId=3 ts=12:45:01]
• [like Message id=5 userId=3 ts=12:45:02]
• [unlike Message id=5 userId=3 ts=12:45:04]
id | timestamp | numLikes
5  | 12:34:39  | 0
5  | 12:45:01  | 1
5  | 12:45:02  | 1
5  | 12:45:04  | 0
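All three races can be settled with option C, arbitration. The sketch below is one way to do it, under illustrative names (MessageState, Event) that are not Yammer's indexer: last-writer-wins on the text, a tombstone so a late modify cannot resurrect a delete, and per-user like state so duplicate likes and out-of-order like/unlike pairs stay idempotent.

```java
import java.util.HashMap;
import java.util.Map;

public class MessageState {
    long textTs;               // timestamp of the winning text event
    String text;
    boolean tombstone;         // deletes leave a marker, never vanish

    static final class Like { long ts; boolean liked; }
    final Map<Long, Like> likesByUser = new HashMap<>();

    void apply(Event e) {
        switch (e.type) {
            case CREATE:
            case MODIFY:
                // Drop stale text; on a timestamp tie keep the first arrival.
                // Ties are where timestamps alone are ambiguous (the second
                // race above), and where vector clocks or semantics come in.
                if (!tombstone && e.timestamp > textTs) {
                    text = e.text;
                    textTs = e.timestamp;
                }
                break;
            case DELETE:
                tombstone = true;   // delete always wins, even over a late modify
                text = null;
                break;
            case LIKE:
            case UNLIKE:
                // Last-writer-wins per user; exact duplicates are no-ops.
                Like cur = likesByUser.get(e.userId);
                if (cur == null || e.timestamp > cur.ts) {
                    Like l = new Like();
                    l.ts = e.timestamp;
                    l.liked = (e.type == Type.LIKE);
                    likesByUser.put(e.userId, l);
                }
                break;
        }
    }

    int numLikes() {
        return (int) likesByUser.values().stream().filter(l -> l.liked).count();
    }

    enum Type { CREATE, MODIFY, DELETE, LIKE, UNLIKE }
    static class Event { Type type; long timestamp; String text; long userId; }
}
```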
Thread example
Zoie
• Real-time indexing system
• Open-sourced by LinkedIn
• Used by LinkedIn in production for about 3 years
• Deployed at a dozen or so locations
• Thanks to Xiaoyang Gu, Yasuhiro Matsuda, John Wang and Lei Wang
Zoie
• Push events into a buffer and the transaction log
• Push the buffer into Zoie
• When Zoie commits, the transaction log is truncated (a schematic follows below)
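A schematic of that hand-off. TransactionLog and the ZoieLike interface are stand-ins, not Zoie's real API: events are made durable in the log first, pushed to the indexer in batches, and the log is truncated only up to what the indexer has committed to disk.

```java
import java.util.ArrayList;
import java.util.List;

public class IndexingPipeline {
    interface ZoieLike {
        void consume(List<Event> batch);  // buffers into a RAM index
        long lastCommittedVersion();      // highest version flushed to disk
    }
    interface TransactionLog {
        void append(Event e);             // durable before indexing
        void truncateThrough(long version);
    }
    static class Event { long version; /* payload elided */ }

    private final ZoieLike indexer;
    private final TransactionLog txnLog;
    private final List<Event> buffer = new ArrayList<>();

    IndexingPipeline(ZoieLike indexer, TransactionLog txnLog) {
        this.indexer = indexer;
        this.txnLog = txnLog;
    }

    void onEvent(Event e) {
        txnLog.append(e);   // 1. durable first, so a crash loses nothing
        buffer.add(e);      // 2. then buffer
    }

    void flush() {
        indexer.consume(new ArrayList<>(buffer));               // 3. push the buffer
        buffer.clear();
        txnLog.truncateThrough(indexer.lastCommittedVersion()); // 4. truncate only past the disk commit
    }
}
```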
Indexing HA
• Cluster queue systems
• Round-robin of Rabbits introduces further out-of-order problems
• Transaction log
  • between RabbitMQ dequeue and Zoie disk commit
Dual indexing
• Primary for serving out
• Secondary for reindexing
• Verify secondary index consistency
• foreach replica do:
  • shutdown
  • mv secondary to primary
  • restart
• Availability should not be affected, except for a slight chance of system failure (a sketch follows below)
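A sketch of that rolling swap. Replica is a hypothetical handle, not Yammer's code; the point is that swapping one replica at a time keeps the others serving, so availability only suffers if a second failure hits mid-swap.

```java
import java.util.List;

public class IndexSwapper {
    interface Replica {
        boolean secondaryConsistent();    // verify before promoting
        void shutdown();
        void promoteSecondaryToPrimary(); // e.g. an atomic directory rename
        void restart();
    }

    public void rollingSwap(List<Replica> replicas) {
        for (Replica r : replicas) {
            if (!r.secondaryConsistent()) continue; // skip bad rebuilds
            r.shutdown();
            r.promoteSecondaryToPrimary();
            r.restart();
        }
    }
}
```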
Index consistency problems
• Detect
  • integrity check against the source of truth (see the sketch below)
• Reindex
  • gaps
  • whole
  • reindex into the secondary, swap with the primary
• Repair
  • patch in place
  • run on restart
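A sketch of the detect/repair loop: spot-check a statistical sample of documents against the source of truth, patch small divergences in place, and let a high divergence rate trigger a gap or full reindex into the secondary. All interfaces here are illustrative.

```java
import java.util.Random;

public class IntegrityChecker {
    interface SourceOfTruth { Doc fetch(long id); long maxId(); }
    interface Index { Doc fetch(long id); void patch(Doc d); }
    interface Doc { long id(); boolean sameAs(Doc other); }

    private final Random random = new Random();

    // Returns the fraction of sampled docs that diverged; a high rate
    // should escalate from patching to a reindex.
    double checkAndPatch(SourceOfTruth db, Index index, int sampleSize) {
        int bad = 0;
        for (int i = 0; i < sampleSize; i++) {
            long id = 1 + (long) (random.nextDouble() * db.maxId());
            Doc expected = db.fetch(id);
            if (expected == null) continue;          // id gap in the DB
            Doc actual = index.fetch(id);
            if (actual == null || !actual.sameAs(expected)) {
                bad++;
                index.patch(expected);               // repair in place
            }
        }
        return bad / (double) sampleSize;
    }
}
```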
Search
• <insert animated architecture slide>
Goal: 50/50-500/100 per partition
• 50M docs
• 50 msec P75 - 500 msec P99
• 100 qps
RESTful API over HTTP
• http://search.yammer.com:8085/api/search/1/1?query=i&start
Payload
• Payload is usually a small JSON object
• For security reasons, only ids and scores are sent out
• One page (usually 10 items) x 6 index types (a sketch follows below)
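A sketch of what such a response might look like; field names and the serialized shape are hypothetical, but the idea is that only ids and scores leave the search tier (hydration happens behind the security model), one page per index type.

```java
import java.util.List;
import java.util.Map;

public class SearchResponse {
    // Serialized shape might be:
    // {"messages":[{"id":5,"score":1.7}, ...], "users":[...], ...}
    public Map<String, List<Hit>> hitsByType; // one entry per index type (6 total)

    public static class Hit {
        public long id;      // the only identifier sent out
        public double score; // relevance score for client-side ordering
    }
}
```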
Web Server
• Jersey over Jetty
  • http://jetty.codehaus.org/jetty/
  • http://jsr311.java.net/
• Custom configuration
  • tuned to the required 100 qps
  • generally impeccable, occasional lock contention
• Annotation-driven
• Much easier to test (a sketch follows below)
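A minimal JAX-RS (JSR 311) resource in the spirit of the endpoint shown earlier. The path template and parameter names are inferred from the example URL (/api/search/1/1?query=…) and are assumptions, not Yammer's actual code; the two path segments are guessed to be network and user ids.

```java
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/api/search/{networkId}/{userId}")
@Produces(MediaType.APPLICATION_JSON)
public class SearchResource {
    interface Backend { SearchResponse search(long networkId, long userId, String query, int start); }

    private Backend backend; // injected by the container in a real setup

    @GET
    public SearchResponse search(@PathParam("networkId") long networkId,
                                 @PathParam("userId") long userId,
                                 @QueryParam("query") String query,
                                 @QueryParam("start") @DefaultValue("0") int start) {
        // Delegate to the search master; return ids and scores only.
        return backend.search(networkId, userId, query, start);
    }
}
```

Annotation-driven resources like this are easy to unit-test: the method is a plain Java call, with no servlet machinery required.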
Search master
• More like a router
• Knows about the partitioning scheme
• Performs load normalization:
  • Call all, take the first (a sketch follows below)
    • possible to use multicast
  • Round Robin
    • switch to for scale
  • DLB (least busy)
• Maintains primary SLA metrics
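A sketch of the "call all, take the first" policy using the standard library's ExecutorService.invokeAny, which returns the result of the first task to complete successfully. Replica and Results are illustrative stand-ins.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SearchMaster {
    interface Replica { Results query(String q); }
    static class Results { /* ids and scores, elided */ }

    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Fan the query out to every replica of a partition; the first
    // successful response wins, slow or failed replicas are ignored.
    Results callAllTakeFirst(List<Replica> replicas, String q, long timeoutMs) throws Exception {
        List<Callable<Results>> calls = new ArrayList<>();
        for (Replica r : replicas) {
            calls.add(() -> r.query(q));
        }
        return pool.invokeAny(calls, timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

Calling all replicas buys latency at the cost of total load, which is why the slide notes a switch to round robin for scale.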
Partitioning
• Simple Jenkins 64-bit hash of networkId (a sketch follows below)
• 2-level hash to split large partitions
• Exception list to split large partitions
• Limitation: cannot partition inside a single network
• Repartitioning story is expensive
• Consistent hashing?
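A sketch of the scheme: hash the networkId, mod the partition count, with an exception list routing oversized networks to dedicated partitions. The deck mentions a 64-bit Jenkins hash; the classic 32-bit one-at-a-time variant below is an illustrative stand-in.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class Partitioner {
    private final int numPartitions;
    private final Map<Long, Integer> exceptions; // networkId -> dedicated partition

    Partitioner(int numPartitions, Map<Long, Integer> exceptions) {
        this.numPartitions = numPartitions;
        this.exceptions = exceptions;
    }

    int partitionFor(long networkId) {
        Integer pinned = exceptions.get(networkId);
        if (pinned != null) return pinned; // split-out large network
        return Math.floorMod(jenkinsOneAtATime(Long.toString(networkId)), numPartitions);
    }

    // Jenkins' one-at-a-time hash.
    static int jenkinsOneAtATime(String key) {
        int hash = 0;
        for (byte b : key.getBytes(StandardCharsets.UTF_8)) {
            hash += (b & 0xff);
            hash += hash << 10;
            hash ^= hash >>> 6;
        }
        hash += hash << 3;
        hash ^= hash >>> 11;
        hash += hash << 15;
        return hash;
    }
}
```

Because everything for one networkId lands on one partition, a single huge network cannot be split further, which is the limitation the slide calls out.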
Testing
• Indexing
  • Idempotent
  • Out-of-order delivery
  • Duplicate and incomplete docs tolerance
  • 10K docs delivered in random order with X% dupes and Y% incomplete records (a sketch follows below)
• Search
  • Small manual index built by recording events
  • Unit-style tests (TestNG) with asserts
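A sketch of the randomized indexing test, reusing the MessageState arbitration sketch from the race-condition slides: build an event stream, inject duplicates, shuffle, and assert the final state is identical regardless of delivery order. Event counts, seed, and dupe rate are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import org.testng.Assert;
import org.testng.annotations.Test;

public class OutOfOrderIndexingTest {

    @Test
    public void randomOrderWithDupesConverges() {
        List<MessageState.Event> events = new ArrayList<>();
        events.add(event(MessageState.Type.CREATE, 1, "hello", 0));
        events.add(event(MessageState.Type.MODIFY, 2, "hello there", 0));
        for (long u = 1; u <= 50; u++) {
            events.add(event(MessageState.Type.LIKE, 10 + u, null, u));
        }
        events.addAll(new ArrayList<>(events.subList(0, 10))); // inject exact dupes

        Random rnd = new Random(42);
        String expected = null;
        for (int trial = 0; trial < 100; trial++) {
            Collections.shuffle(events, rnd);      // simulate out-of-order delivery
            MessageState state = new MessageState();
            for (MessageState.Event e : events) state.apply(e);
            String fingerprint = state.text + "/" + state.numLikes();
            if (expected == null) expected = fingerprint;
            Assert.assertEquals(fingerprint, expected, "state must be order-independent");
        }
    }

    private static MessageState.Event event(MessageState.Type t, long ts, String text, long userId) {
        MessageState.Event e = new MessageState.Event();
        e.type = t; e.timestamp = ts; e.text = text; e.userId = userId;
        return e;
    }
}
```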
Production
• Measure
• Hardware is cheap, people are not
• People require more maintenance
• Have enough redundancy
Metrics • JVM, Queue, Logging and Configuration
Metrics • Gauges
Metrics • Meters
Metrics • Timers
Metrics • https://github.com/codahale/metrics (an example follows below)
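A small example of the three instrument types named above, using Coda Hale's metrics library from the linked repo. Modern releases live under com.codahale.metrics; the 2011-era package was com.yammer.metrics, so treat the exact API below as version-dependent. Metric names are illustrative.

```java
import com.codahale.metrics.Gauge;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

public class SearchMetrics {
    private final MetricRegistry registry = new MetricRegistry();

    private final Meter queries = registry.meter("search.queries"); // event rates (1/5/15-min)
    private final Timer latency = registry.timer("search.latency"); // duration distribution

    public SearchMetrics(final java.util.Queue<?> indexQueue) {
        // Gauge: an instantaneous reading, e.g. the indexing queue depth.
        registry.register("index.queue.depth", (Gauge<Integer>) indexQueue::size);
    }

    public String search(String q) {
        queries.mark();                         // count the query
        try (Timer.Context ctx = latency.time()) {
            return runQuery(q);                 // timed; P75/P99 come for free
        }
    }

    private String runQuery(String q) { return "results for " + q; }
}
```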
Lessons
• Do not underestimate your data model
• Tradeoff between consistency, real-time availability, and correctness
• Measure
• Flexible partitioning scheme
• Data recovery plan
Future
• Dynamic routing
  • Zookeeper
• Partition rebalancing
• Multiple sub-partitions with different SLAs
• Work on relevancy
• Multiple languages
• Document parsing
• External data
• Scala
Q&A Session: What’s On Your Mind?