About

Trey is SVP of Engineering @ Lucidworks, co-author of Solr in Action, founder of Celiaccess.com, and a researcher/public speaker on search, analytics, recommendation systems, and natural language processing.

I was invited to speak on 2015.11.10 at the Bay Area Search Meetup in San Jose, CA. With over 175 people marked as attending (and several more on the waitlist who showed up), we had a very exciting and lively discussion for almost 2 hours (about 1/2 was my presentation, with the other half being Q&A mixed in throughout). Thanks again to eBay for hosting the event and providing pizza and beverages, and to everyone who attended for the warm welcome and great discussions.

Video:

Slides:
http://www.slideshare.net/treygrainger/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation

Talk Summary:
Search engines frequently miss the mark when it comes to understanding user intent. This talk will walk through some of the key building blocks necessary to turn a search engine into a dynamically-learning “intent engine”, able to interpret and search on meaning, not just keywords. We will walk through CareerBuilder’s semantic search architecture, including semantic autocomplete, query and document interpretation, probabilistic query parsing, automatic taxonomy discovery, keyword disambiguation, and personalization based upon user context/behavior. We will also see how to leverage an inverted index (Lucene/Solr) as a knowledge graph that can be used as a dynamic ontology to extract phrases, understand and weight the semantic relationships between those phrases and known entities, and expand the query to include those additional conceptual relationships.

As an example, most search engines completely miss the mark at parsing a query like (Senior Java Developer Portland, OR Hadoop). We will show how to dynamically understand that “senior” designates an experience level, that “java developer” is a job title related to “software engineering”, that “portland, or” is a city with a specific geographical boundary (as opposed to a keyword followed by a boolean operator), and that “hadoop” is the skill “Apache Hadoop”, which is also related to other terms like “hbase”, “hive”, and “map/reduce”. We will discuss how to train the search engine to parse the query into this intended understanding and how to reflect this understanding to the end user to provide an insightful, augmented search experience.
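The entity-tagging step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the entity dictionary and greedy longest-match parser below are illustrative, not CareerBuilder's actual implementation, which uses probabilistic scoring over a learned taxonomy):

```python
# Hypothetical sketch of entity-based query parsing: tag each known phrase
# in the query with its entity type from a domain dictionary.
# The dictionary entries below are illustrative, not a real taxonomy.
ENTITY_TYPES = {
    "senior": "experience_level",
    "java developer": "job_title",
    "portland, or": "city",
    "hadoop": "skill",
}

def parse_query(query):
    """Greedily match the longest known phrase at each token position."""
    tokens = query.lower().split()
    parsed, i = [], 0
    while i < len(tokens):
        match = None
        # Try longer phrases first (up to 3 tokens here).
        for length in range(min(3, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + length])
            if phrase in ENTITY_TYPES:
                match = (phrase, ENTITY_TYPES[phrase], length)
                break
        if match:
            parsed.append((match[0], match[1]))
            i += match[2]
        else:
            parsed.append((tokens[i], "keyword"))
            i += 1
    return parsed
```

Run against the example query, this produces `[("senior", "experience_level"), ("java developer", "job_title"), ("portland, or", "city"), ("hadoop", "skill")]` — notably recognizing "portland, or" as a city rather than a keyword followed by a boolean operator.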

Topics: Semantic Search, Apache Solr, Finite State Transducers, Probabilistic Query Parsing, Bayes Theorem, Augmented Search, Recommendations, Query Disambiguation, NLP, Knowledge Graphs

I was excited to be selected again this year to present at Lucene/Solr Revolution 2015 in Austin, Texas. My talk today covered one of my main areas of focus over the last year - building out a highly relevant and intelligent semantic search system. While I described and provided demos on the capabilities of the entire system (and many of the technical details for how someone could implement a similar system), I spent the majority of the time on the core Knowledge Graph we’ve built using Apache Solr to dynamically understand the meaning of any query or document that is provided as search input. This Solr-based Knowledge Graph - combined with a probabilistic, entity-based query parser, a sophisticated type-ahead prediction mechanism, spell checking, and a query-augmentation stage - is core to the Intent Engine we’ve built to be able to search on “things, not strings”, and to truly understand and match based upon the intent behind the user’s search.

Video:

Slides:
http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine

Talk Summary:
Search engines frequently miss the mark when it comes to understanding user intent. This talk will describe how to overcome this by leveraging Lucene/Solr to power a knowledge graph that can extract phrases, understand and weight the semantic relationships between those phrases and known entities, and expand the query to include those additional conceptual relationships. For example, if a user types in (Senior Java Developer Portland, OR Hadoop), you or I know that the term “senior” designates an experience level, that “java developer” is a job title related to “software engineering”, that “portland, or” is a city with a specific geographical boundary, and that “hadoop” is a technology related to terms like “hbase”, “hive”, and “map/reduce”. Out of the box, however, most search engines just parse this query as text:((senior AND java AND developer AND portland) OR (hadoop)), which is not at all what the user intended. We will discuss how to train the search engine to parse the query into this intended understanding, and how to reflect this understanding to the end user to provide an insightful, augmented search experience.
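The query-expansion step mentioned above can be illustrated with a small sketch. This is hypothetical code (the relationship weights and Solr boost syntax shown are illustrative, not the talk's actual knowledge-graph traversal):

```python
# Hypothetical sketch of query augmentation: expand a recognized entity
# with weighted, semantically related terms from a knowledge graph.
# The related-term weights below are illustrative values.
RELATED = {
    "hadoop": [("hbase", 0.8), ("hive", 0.75), ("map/reduce", 0.7)],
}

def expand_term(term, min_weight=0.5):
    """Build a boosted OR clause in Solr query syntax for a known term."""
    clauses = ['"%s"' % term]
    for related, weight in RELATED.get(term, []):
        if weight >= min_weight:
            # Solr's ^ operator boosts a clause by the given weight.
            clauses.append('"%s"^%.2f' % (related, weight))
    return "(" + " OR ".join(clauses) + ")"
```

For example, `expand_term("hadoop")` yields `("hadoop" OR "hbase"^0.80 OR "hive"^0.75 OR "map/reduce"^0.70)`, so documents mentioning related technologies still match, with conceptually closer terms weighted higher.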

Topics: Semantic Search, Finite State Transducers, Probabilistic Parsing, Bayes Theorem, Augmented Search, Recommendations, NLP, Knowledge Graphs

I was invited to present again this year at Lucene/Solr Revolution 2014 in Washington, D.C. My presentation took place this afternoon and covered the topic of “Semantic & Multilingual Strategies in Lucene/Solr.” The material was taken partially from the extensive Multilingual Search chapter (ch. 14) in Solr in Action and from some of the exciting semantic search work we’ve been doing recently at CareerBuilder.

Video:

Slides:
http://www.slideshare.net/treygrainger/semantic-multilingual-strategies-in-lucenesolr

Talk Summary: When searching on text, choosing the right CharFilters, Tokenizer, stemmers, and other TokenFilters for each supported language is critical. Additional tools of the trade include language detection through UpdateRequestProcessors, part-of-speech analysis, entity extraction, stopword and synonym lists, relevancy differentiation for exact vs. stemmed vs. conceptual matches, and identification of statistically interesting phrases per language. For multilingual search, you also need to choose between several strategies such as: searching across multiple fields, using a separate collection per language combination, or combining multiple languages in a single field (custom code is required for this and will be open sourced). These all have their own strengths and weaknesses depending upon your use case.

This talk will provide a tutorial (with code examples) on how to pull off each of these strategies as well as compare and contrast the different kinds of stemmers, review the precision/recall impact of stemming vs. lemmatization, and describe some techniques for extracting meaningful relationships between terms to power a semantic search experience per-language. Come learn how to build an excellent semantic and multilingual search system using the best tools and techniques Lucene/Solr has to offer!
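As a taste of the first strategy above (searching across multiple fields), here is a minimal sketch. The field names and language list are hypothetical; the idea is simply that the same user query is issued against each language-specific analyzed field:

```python
# Sketch of the "one field per language" multilingual strategy: the same
# query is expanded across per-language Solr fields, each of which was
# indexed with that language's analysis chain. Field names are hypothetical.
def multilingual_query(user_query, languages=("en", "es", "fr")):
    """Expand one user query across language-specific fields."""
    fields = ["content_%s" % lang for lang in languages]
    return " OR ".join("%s:(%s)" % (field, user_query) for field in fields)
```

So `multilingual_query("search engine")` produces `content_en:(search engine) OR content_es:(search engine) OR content_fr:(search engine)`, letting each field's stemmer and stopword list do language-appropriate matching.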

My team was fortunate to have 2 papers accepted for publication through the 2014 IEEE International Conference on Big Data, held last week in Washington, D.C. I presented one of the papers titled “Crowdsourced Query Augmentation through the Semantic Discovery of Domain-specific Jargon.” The slides and video (coming soon) are posted below for anyone who could not make the presentation in person.

Slides:

Paper Abstract: Most work in semantic search has thus far focused upon either manually building language-specific taxonomies/ontologies or upon automatic techniques such as clustering or dimensionality reduction to discover latent semantic links within the content that is being searched. The former is very labor intensive and is hard to maintain, while the latter is prone to noise and may be hard for a human to understand or to interact with directly. We believe that the links between similar users’ queries represent a largely untapped source for discovering latent semantic relationships between search terms. The proposed system is capable of mining user search logs to discover semantic relationships between key phrases in a manner that is language agnostic, human understandable, and virtually noise-free.
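The core intuition of the paper - that queries issued by the same users tend to be semantically related - can be sketched as a simple co-occurrence count over search logs. This is an illustrative simplification (the function and data shapes below are hypothetical, not the paper's actual system, which adds normalization and noise filtering):

```python
from collections import defaultdict
from itertools import combinations

# Sketch of mining query logs for latent semantic relationships: count,
# for each pair of distinct queries, how many users issued both. Pairs
# with high counts across many users are candidate semantic neighbors.
def query_cooccurrences(user_query_logs):
    """user_query_logs maps a user id to the list of queries they ran."""
    counts = defaultdict(int)
    for queries in user_query_logs.values():
        # Deduplicate and sort so each pair is counted once per user,
        # in a canonical (a, b) order.
        for a, b in combinations(sorted(set(queries)), 2):
            counts[(a, b)] += 1
    return counts
```

With logs like `{"u1": ["java developer", "hadoop"], "u2": ["hadoop", "java developer"], "u3": ["nurse"]}`, the pair ("hadoop", "java developer") scores 2, surfacing a domain-specific association no manually built ontology would need to supply.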

I was fortunate to be able to speak last week (along with Joe Streeky, my Search Infrastructure Development Manager) at the very first Atlanta Solr Meetup held at Atlanta Tech Village. The talk covered how we scale Solr at CareerBuilder to power our recommendation engine, semantic search platform, and big data analytics products. Thanks to everyone who came out for a great event and to LucidWorks who sponsored us with the meeting place, pizza, and drinks.

Video:

Slides:
http://www.slideshare.net/treygrainger/scaling-recommendations-semantic-search-data-analytics-with-solr

Talk Summary: CareerBuilder uses Solr to power their recommendation engine, semantic search, and data analytics products. They maintain an infrastructure of hundreds of Solr servers, holding over a billion documents and serving over a million queries an hour across thousands of unique search indexes. Come learn how CareerBuilder has integrated Solr into their technology platform (with assistance from Hadoop, Cassandra, and RabbitMQ) and walk through API and code examples to see how you can use Solr to implement your own real-time recommendation engine, semantic search, and data analytics solutions.
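As a small taste of the recommendation use case, a content-based "documents like this one" request can be built on Solr's standard MoreLikeThis handler. The field names ("id", "job_title", "skills") are hypothetical; the `mlt.*` parameters themselves are standard Solr options:

```python
from urllib.parse import urlencode

# Sketch of a content-based recommendation request using Solr's
# MoreLikeThis handler: find documents similar to a given job posting.
def recommendation_params(doc_id):
    return urlencode({
        "q": "id:%s" % doc_id,
        "mlt.fl": "job_title,skills",  # fields to mine for interesting terms
        "mlt.mintf": 1,                # min term frequency in the source doc
        "mlt.mindf": 2,                # min document frequency in the index
        "rows": 10,                    # return the top 10 similar documents
    })
```

Sending these parameters to a core's `/mlt` endpoint asks Solr to extract the statistically interesting terms from the source document's fields and run them as a query, which is the simplest building block of a Solr-powered recommendation engine.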

Timothy Potter and I were recently interviewed about the launch of our new book, Solr in Action, which was published last month. If you want to learn more about the book or just hear about our two-year journey to bring what critics are calling the “definitive guide” to Solr to market, please check out the podcast below:

Description:
This week, the SolrCluster team is joined by Trey Grainger, Director of Engineering for Search at CareerBuilder, and Timothy Potter, Lucene/Solr Committer and senior engineer at LucidWorks, to discuss their recently released co-authored book, Solr in Action. Solr in Action is a comprehensive guide to implementing scalable search with Lucene/Solr, based on the real-world applications that Tim and Trey have worked on throughout the course of their careers in Solr. Tim and Trey share with us the challenges they faced, accomplishments they achieved, and what they learned in the process of co-authoring their first book.

SolrCluster is hosted by: Yann Yu and Adam Johnson. Questions? email us at solrcluster@lucidworks.com or on twitter @LucidWorks #solrcluster.

Solr in Action is Published!

March 26th, 2014

After nearly two years of writing, editing, and coding up examples, I’m excited to announce that Solr in Action has finally been published! We released our first “early access” version back in October of 2012 and have since been working tirelessly to round out this comprehensive (664 pages!) guide covering versions through Solr 4.7.

Solr in Action cover

Solr in Action is an essential resource for implementing fast and scalable search using Apache Solr. It uses well-documented examples ranging from basic keyword searching to scaling a system for billions of documents and queries. With this book, you’ll gain a deep understanding of how to implement core Solr capabilities such as faceted navigation through search results, matched snippet highlighting, field collapsing and search results grouping, spell-checking, query autocomplete, querying by functions, and more. You’ll also see how to take Solr to the next level, with deep coverage of large-scale production use cases, sophisticated multilingual search, complex query operations, and advanced relevancy tuning strategies.

Solr in Action is intentionally designed to be a learning guide as opposed to a reference manual. It builds from an initial introduction to Solr all the way to advanced topics such as implementing a predictive search experience, writing your own Solr plugins for function queries and multilingual text analysis, using Solr for big data analytics, and even building your own Solr-based recommendation engine.

The book uses fun real-world examples, including analyzing the text of tweets, searching and faceting on restaurants, grouping similar items in an ecommerce application, highlighting interesting keywords in UFO sighting reports, and even building a personalized job search experience. Executable code for all examples is included with the book, and several chapters are available for free at the publisher’s website.

I just got back from a fantastic trip to Dublin, Ireland for last week’s Lucene/Solr Revolution EU. I was privileged this year to present a deep dive (75 minute) session on “Enhancing Relevancy through Personalization & Semantic Search.” I appreciate all the great questions and feedback from everyone who attended.

Video:

Slides:
http://www.slideshare.net/treygrainger/enhancing-relevancy-through-personalization-semantic-search-28741313

Talk Summary: Matching keywords is just step one in the effort to maximize the relevancy of your search platform. In this talk, you’ll learn how to implement advanced relevancy techniques which enable your search platform to “learn” from your content and users’ behavior.

Topics will include automatic synonym discovery, latent semantic indexing, payload scoring, document-to-document searching, foreground vs. background corpus analysis for interesting term extraction, collaborative filtering, and mining user behavior to drive geographically and conceptually personalized search results.

You’ll learn how CareerBuilder has enhanced Solr (also utilizing Hadoop) to dynamically discover relationships between data and behavior, and how you can implement similar techniques to greatly enhance the relevancy of your search platform.
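To make the foreground vs. background corpus idea from the topic list concrete, here is a toy scoring sketch. The formula below (a frequency ratio weighted by log-likelihood) is illustrative, not the exact statistic from the talk:

```python
import math

# Sketch of foreground vs. background corpus analysis: score how much
# more often a term appears in a focused subset of documents (e.g. one
# job category) than in the corpus as a whole. Terms with high scores
# are "interesting" to that subset and become candidate related terms.
def term_interestingness(fg_count, fg_total, bg_count, bg_total):
    """Higher scores mean the term is disproportionately common in the foreground."""
    p_fg = fg_count / fg_total   # term probability in the foreground corpus
    p_bg = bg_count / bg_total   # term probability in the background corpus
    return p_fg * math.log(p_fg / p_bg)
```

A term like "hadoop" appearing in 50 of 1,000 foreground documents but only 100 of 100,000 background documents scores highly, while a generic term that is common everywhere scores near (or below) zero and is filtered out.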

I just made it back from the beautiful, sunny city of San Diego where LucidWorks hosted another fantastic Lucene/Solr Revolution conference this week. I was invited back this year to present on “Building a Real-time, Big Data Analytics Platform with Solr.” Thank you to everyone who came and packed out the room, especially those who provided great feedback afterward and asked all of the terrific questions!

Video:

Slides: http://www.slideshare.net/treygrainger/building-a-real-time-big-data-analytics-platform-with-solr

Talk Summary: Having “big data” is great, but turning that data into actionable intelligence is where the real value lies. This talk will demonstrate how you can use Solr to build a highly scalable data analytics engine to enable customers to engage in lightning fast, real-time knowledge discovery.

At CareerBuilder, we utilize these techniques to report the supply and demand of the labor force, compensation trends, customer performance metrics, and many live internal platform analytics. You will walk away from this talk with an advanced understanding of faceting, including pivot faceting, geo/radius faceting, time-series faceting, function faceting, and multi-select faceting. You’ll also get a sneak peek at some new faceting capabilities just wrapping up development, including distributed pivot facets and percentile/stats faceting, which will be open-sourced.
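As an illustration of how several of these facet types combine in one analytics request, here is a sketch of the query parameters. The field names ("industry", "job_title", "posted_date") are hypothetical; the `facet.pivot` and `facet.range.*` parameters are standard Solr 4 faceting options:

```python
from urllib.parse import urlencode

# Sketch of an analytics-style Solr request: no documents returned,
# just hierarchical pivot facets plus a monthly time series.
analytics_params = urlencode({
    "q": "*:*",
    "rows": 0,                            # analytics only, skip documents
    "facet": "true",
    "facet.pivot": "industry,job_title",  # hierarchical (pivot) facets
    "facet.range": "posted_date",         # time-series faceting
    "facet.range.start": "NOW/MONTH-12MONTHS",
    "facet.range.end": "NOW/MONTH",
    "facet.range.gap": "+1MONTH",         # one bucket per month
})
```

A single request like this returns, in one round trip, the document counts for every job title within every industry plus a twelve-month trend line - which is what makes Solr viable as a real-time analytics engine rather than just a text search engine.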

The presentation will be a technical tutorial, along with real-world use-cases and data visualizations. After this talk, you’ll never see Solr as just a text search engine again.

Solr in Action

I’m excited to announce early access availability of Solr in Action, a book on Apache Solr 4 which I am co-authoring with Timothy Potter. The MEAP (Manning Early Access Program) released today, which means that you can purchase the book early and receive new chapters as they are being written so that you don’t have to wait for the final release before having access. Three chapters are currently available (“Introduction to Solr”, “Key Solr Concepts”, and “Indexing”), and we expect a new chapter to be released every few weeks.

Please consider heading over to solrinaction.com and picking up a copy today!

Book Summary:
Whether you’re handling big data, building cloud-based services, or developing multi-tenant web applications, it’s vital to have a fast, reliable search solution. Apache Solr is a scalable and ready-to-deploy open-source full-text search engine powered by Lucene. It offers key features like multi-lingual keyword searching, faceted search, intelligent matching, and relevancy weighting right out of the box. Solr 4 provides new features to enable large-scale distributed search solutions that can be deployed as an elastically scaling cloud-based service and can provide additional intelligence to other big data technologies like Hadoop and Mahout.

Solr in Action is a comprehensive guide to implementing scalable search using Apache Solr 4. This clearly-written book walks you through well-documented examples ranging from basic keyword searching to scaling a system for billions of documents and queries. You’ll gain a deep understanding of how to implement core Solr capabilities such as faceted navigation through search results, matched snippet highlighting, field collapsing and search results grouping, spell checking, query auto-complete, querying by functions, and geo-spatial searching.

Along the way, you’ll discover more advanced topics, such as scaling Solr for large production environments, best practices and strategies for handling multi-lingual content, building a Solr-powered recommendation engine, performing complex data analytics, and integrating Solr with other big data technologies for machine learning and knowledge discovery.

You will also learn how Solr’s relevancy algorithm works, best practices and tricks for tuning and measuring your search relevancy, and even how to write and integrate Solr plugins and patches to introduce your own great new search features.