MozCon 2019 is an SEO conference held at the Washington State Convention Center (WSCC) in Seattle, United States. This digital marketing conference featured an array of fantastic and insightful presentations by reputable industry experts. We start off this Ignite Search blog post series with none other than Joelle Irvine, and for good reason too.
As Visual Search (VS) continues to evolve and revolutionise how consumers find and buy products, it only seemed fitting to dive into Joelle’s presentation as she discusses the latest technology in VS. Her presentation comes with a delightful combination of humour, personal stories and case studies to demonstrate the endless opportunities we can take in VS. Without further ado, let’s get going.
WHO IS JOELLE IRVINE?
Before jumping into her presentation, we thought we should first introduce Joelle Irvine. Joelle is the Director of Marketing & Growth at Bookmark, with over 12 years of experience in marketing, business development and project management. She is a results-driven strategic thinker with a keen eye for VS optimisation and a strong interest in the latest VS applications and platforms such as Pinterest and Google Lens.
NOW FOR THE PRESENTATION DETAILS
The presentation itself was titled ‘Get The Look: Improve The Shopper Experience With Visual Search Optimisation’. Joelle discussed how image optimisation can improve the overall customer experience and play a role in discoverability, product evaluation and purchase decisions for online shoppers.
The 3 tactics she revealed in her presentation are listed below:
- Leverage existing platforms and partnerships
- Capitalise on impulse buying
- Create a compelling shopping experience
In addition, this blog post discusses how to optimise your images to enhance VS for your users.
STARTING THE PRESENTATION – VISUAL SEARCH
Joelle started the presentation with a personal story of her stumbling across a pair of shoes online.
Her story described a very common online shopping behaviour, beginning with search intent. Her search intent began with a pair of black shoes, which led her to the query:
“Women’s black patent leather brogue shoes with black and white platform sole”
Essentially, she broke down the elements of the shoe, followed by a series of relevant search queries.
From her search journey, she concluded that although Google Images may have provided some relevant results to her search, Google Shopping produced even better results.
Joelle explained that VS works best in these circumstances because it is engineered to give users a more refined search journey, which helps most when a text query would otherwise be long and complicated, as in her example.
She went on to stress that VS is a more innovative, sophisticated, practical and ‘way more user-friendly’ approach to addressing users’ search intent.
IMAGE SEARCH VS. VISUAL SEARCH
Joelle then highlighted the difference between Image Search and VS. She explained that image search involves typing a text query into a search engine, with images returned as results.
With VS, by contrast, users submit an image as the query and the search engine returns more visual options.
Joelle described this process as the ‘Shazam for Images’.
Joelle stressed that although VS is not yet widely used, user expectations for this search method are high. She shared a statistic from The Intent Lab: 85% of consumers place more importance on visuals than on text when searching online for clothing or furniture.
She drew further figures from ViSenze, noting that 62% of young people would like the ability to search by image, and over half of them would like to be able to click those images to make a purchase.
With excitement, she then began to discuss the wonders of Pinterest and Google Lens.
Pinterest is a visual discovery engine for finding ideas such as recipes, home and style inspiration and much more.
Joelle shared Pinterest’s recently announced feature, Hybrid Search. This is Pinterest’s approach to bringing search intent into its discovery experience.
The feature is a hybrid between traditional text search and image search, allowing users to perform a search that includes the aesthetics of an existing pin. With a re-engineered VS algorithm, it provides visual results based on the ‘look and feel’ of the pin submitted.
Take, for example, the story of the colourful chair.
When searching for this colourful chair (a Mondrian chair) in Pinterest, users will find similar visual content, as seen in the image below:
With the new hybrid search feature, users can find all sorts of decor that match the aesthetics of this chair. To do this, simply input the pin of this colourful chair along with text queries such as ‘bookshelves’, ‘walls’ or even ‘clock’, and users will find the following results:
This feature broadens the visual results beyond the basic subject matter of the image by extracting numerous elements within it and pairing them with the queries users supply.
Another Pinterest feature Joelle spoke about passionately was Complete The Look. This feature leverages rich scene context to recommend visually compatible results for fashion and home decor pins, taking into account visual elements such as body type, season and pieces of furniture to produce ‘taste-based’ recommendations.
For example, this feature will be able to identify elements of an image to provide users with an array of visual results that can be detected within the original image as seen in the image below:
Pinterest refers to this function as a visual complement system. Essentially, the feature recommends results that complement, or as Joelle described it, ‘go well with’ a query image.
Google Lens launched in 2017 and can now identify over 1 billion items, four times as many as when it started. Searchers can use Google Lens directly within Google Image search on mobile for some images.
This visual search tool helps people learn more about images by providing a variety of similar visual results, as seen in the image below:
But it doesn’t stop there: Joelle shared new features that were announced at the Google I/O conference in May 2019.
These features include:
- Translate: Point your camera at text and Google Lens will automatically detect the language and overlay the translation right on top of the original words, in more than 100 languages.
- Text: Text is all around us. Copy and paste text from the real world—like recipes, gift card codes, or Wi-Fi passwords—to your phone.
- Auto: Google Lens will automatically provide relevant results based on what you are pointing your camera at.
- Shopping: Point your camera at clothes, furniture, or home decor to see similar items you can choose from. Alternatively Google Lens can scan a barcode to see that exact item.
- Dining: Google Lens can automatically highlight which dishes are popular, right on the physical menu. When you tap on a dish, you can see what it actually looks like and what people are saying about it. This data is drawn from photos and reviews on Google Maps.
JUMPING INTO THE TACTICS: VISUAL SEARCH TACTIC #1 – LEVERAGE EXISTING PLATFORMS & PARTNERSHIPS
Joelle’s first tactic was to leverage existing platforms and partnerships. She explained that if the technology exists, why not use it?
By optimising content for your target audience on platforms such as Pinterest and Instagram, you create opportunities for users to convert. In addition, she suggested partnering with companies or industry leaders that are already incorporating VS into their systems, such as ViSenze.
For example, Levi’s partnered with Pinterest to introduce ‘Styled by Levi’s’, as seen in the image below:
This tool offers a first-in-market, personalised styling experience for Levi’s customers. The feed is based on a visual questionnaire and style insights from users’ Pinterest activity.
The result of this collaboration provides users a Pinterest board with a variety of recommended Levi products that will complement the user’s style. For instance, this tool may suggest a pair of jeans tailored to your preferences, an editorial image to spark ideas for how to wear them, or a link to the customisation page on Levi.com so you can make them your very own.
VISUAL SEARCH TACTIC #2 – CAPITALISE ON IMPULSE BUYING
The next tactic Joelle discussed was capitalising on impulse-buying behaviour. Though she admitted to some guilt about it, Joelle explained that this behaviour can be highly beneficial for online businesses.
Citing a recent study from GfK (a market research company), Joelle noted that 72% of people say Pinterest inspires them to shop even when they are not looking to buy. She argued that this behaviour creates opportunities for businesses to display their products and services, and highly suggested businesses take advantage of it!
VISUAL SEARCH TACTIC #3 – CREATING A COMPELLING BUYING EXPERIENCE
The third and final tactic was creating a compelling shopping experience both online and in-store. Joelle suggested that developing an in-store experience for your customers can have a significant impact on their buying behaviour, and that the concept can, where possible, be transferred digitally to your website or to third-party platforms.
For example, Amazon has patented a blended-reality mirror that lets customers try on clothes virtually while placing them in a virtual location, as seen in the image below:
The patent describes the mirror as partially reflective and partially transmissive, using a variety of displays, cameras and projectors to create the blended image.
The mirror works by scanning the environment to generate a virtual model, then identifying the user’s face and eyes to determine which objects should appear as a reflection. Once this process is complete, the virtual clothes and scene are transmitted through the mirror to create the blended-reality result.
This was one of many examples Joelle shared with the audience. Although she did not provide an explicit methodology for optimising VS, her message was loud and clear: VS can play a significant, innovative role in creating a compelling buying experience for users online. We dive into this in the next section.
HOW TO OPTIMISE VISUAL SEARCH
Joelle continued to discuss how we can optimise our images for visual search.
Here are 6 quick tips on how to optimise your images.
- Conduct standard image optimisation
If you are familiar with SEO, you will most likely already have the fundamentals of image optimisation under your ‘skill-belt’. These include creating alt tags, title tags, meta descriptions, captions, file names and URLs that include your target keyword(s).
In addition, you will need to take into account image size and responsiveness across multiple devices and screen sizes. All of these play a significant role in how search engines and users determine the relevance of your image.
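As a quick sketch of those fundamentals, a well-optimised product image in HTML might look like the following (the file name, alt text and dimensions here are hypothetical examples, not from Joelle’s talk):

```html
<!-- Descriptive file name and alt text that include the target keywords -->
<img
  src="/images/womens-black-patent-brogue-shoes.jpg"
  alt="Women's black patent leather brogue shoes with black and white platform sole"
  title="Black patent brogue shoes"
  width="800"
  height="800"
  loading="lazy"
/>
```

Explicit width and height attributes also help the page stay responsive without layout shifts across screen sizes.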
- Create an image sitemap
As previously mentioned, image optimisation should consider how search engines and users determine the relevance of the visual content you provide. We recommend creating an image sitemap for search engines to index. This sitemap should concisely list all the images you would like search engines to crawl and index so they can display them in the Search Engine Results Pages (SERPs).
You can easily create an image sitemap online at https://www.xml-sitemaps.com/images-sitemap.html
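Alternatively, if you are comfortable with a little scripting, the same kind of sitemap can be generated yourself. Here is a minimal sketch using Python’s standard library and Google’s image sitemap namespace (the page and image URLs are hypothetical examples):

```python
# Minimal sketch: build an image sitemap with Python's standard library.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
IMAGE_NS = "http://www.google.com/schemas/sitemap-image/1.1"


def build_image_sitemap(pages):
    """pages: dict mapping a page URL to a list of image URLs on that page."""
    ET.register_namespace("", SITEMAP_NS)
    ET.register_namespace("image", IMAGE_NS)
    urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
    for page_url, image_urls in pages.items():
        url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
        ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = page_url
        for image_url in image_urls:
            # Each <image:image> entry points search engines at one image on the page.
            image = ET.SubElement(url, f"{{{IMAGE_NS}}}image")
            ET.SubElement(image, f"{{{IMAGE_NS}}}loc").text = image_url
    return ET.tostring(urlset, encoding="unicode")


sitemap_xml = build_image_sitemap({
    "https://example.com/shoes": [
        "https://example.com/images/black-brogue-shoes.jpg",
    ],
})
print(sitemap_xml)
```

The resulting XML can be saved and referenced from your robots.txt or submitted in Google Search Console like any other sitemap.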
- Develop product data
Product data is all the information about a product which can be read, measured and structured into a usable format.
For example, basic product data in a shopping feed includes brand name, product number, supplier, cost and price, measurements and much more. This data helps visual search systems understand the contents of your images.
Product data allows users to find accurate product information efficiently. By incorporating a Product Data Management (PDM) system into your image optimisation activities, you can provide users with additional, useful information about a specific product, potentially influencing the impulse-buying behaviour Joelle mentioned earlier.
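To make this concrete, a single product entry in a Google Shopping feed typically carries attributes like the following (all values here are hypothetical examples):

```text
id:           SKU-12345
title:        Women's Black Patent Leather Brogue Shoes
description:  Black patent brogues with a black and white platform sole
link:         https://example.com/products/black-brogue-shoes
image_link:   https://example.com/images/black-brogue-shoes.jpg
price:        129.00 AUD
brand:        ExampleBrand
availability: in stock
```

Note the `image_link` attribute in particular: it is what ties your optimised product imagery to the structured product data that visual platforms consume.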
- Sync data with visual platforms
Once your product data is in place, Joelle suggested that syncing it with visual platforms such as Pinterest can increase brand exposure and, in turn, conversions through those platforms.
For example, she proposed using your Google Ads account to sync product data directly from your Google Merchant Centre into your Pinterest catalogue.
This provides users with VS results in Pinterest that contain accurate product information, enticing them to continue a VS journey that may lead to a purchase.
- Incorporate structured data
Structured data is information that has been organised into a formatted repository, allowing search engines such as Google to effectively process and analyse information.
When it comes to optimising your images, Joelle suggested focusing on the following elements:
- Subject matter: the product or person the image depicts
- Category: the subject field, or in this case the image gallery
- Context: the objective of the image, including relevant information such as offers or sales
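One common way to express this kind of structured data is schema.org Product markup in JSON-LD, which Google can process for rich results. A minimal sketch (the product details are hypothetical examples, not from the presentation):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Women's Black Patent Leather Brogue Shoes",
  "image": "https://example.com/images/black-brogue-shoes.jpg",
  "description": "Black patent brogues with a black and white platform sole.",
  "brand": { "@type": "Brand", "name": "ExampleBrand" },
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "AUD",
    "availability": "https://schema.org/InStock"
  }
}
```

Placed in a `<script type="application/ld+json">` tag on the product page, this covers the subject matter (`name`, `image`), category (`@type`) and context (`offers`) elements Joelle highlighted.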
- Enable ‘rich pins’
Lastly, as Joelle continued to express her enthusiasm for Pinterest, she recommended enabling the ‘rich pins’ function on the platform.
Joelle explained that enabling this feature can enhance the conversion funnel for Pinterest leads, or as she described it, ‘improve the pin-to-purchase process’.
The underlying message behind Joelle’s last tip resonates with her earlier tactic: create a compelling experience. By developing an ‘in-store’ experience for your customers, in this case Pinterest leads, we strongly believe this feature will create a compelling experience that will, in turn, have a significant impact on your users’ buying behaviour.
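Rich pins work by reading metadata that you add to your product pages. As a hedged sketch (Joelle did not cover the markup itself, and the values below are hypothetical), product rich pins can pick up Open Graph tags like these:

```html
<!-- Open Graph metadata a product rich pin can read (hypothetical values) -->
<meta property="og:type" content="product" />
<meta property="og:title" content="Women's Black Patent Leather Brogue Shoes" />
<meta property="og:description" content="Black patent brogues with a platform sole." />
<meta property="og:url" content="https://example.com/products/black-brogue-shoes" />
<meta property="product:price:amount" content="129.00" />
<meta property="product:price:currency" content="AUD" />
```

Once the markup is in place, pages can be validated and rich pins applied for through Pinterest’s own tooling.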
FINAL TIP FOR VISUAL CONTENT – BEYOND THE TECHNICAL
Joelle concluded her presentation with 7 final tips on what a product image should contain.
1. Product images should be consistent with the brand, following an overall theme and format that lets viewers recognise a product image’s origin.
2. They should be clutter-free to ensure viewers are not distracted by various other elements that are present in the product images.
3. To reinforce the previous tip, Joelle highly recommends that a product image have a clear focal point to draw viewers’ attention.
4. It is beneficial for product images to contain context that gives viewers perspective when evaluating the product.
5. If you do use stock images, customise them and ensure the images you select have not been overused. This prevents viewers from bringing preconceived notions to your product images and distorting their objective.
6. Provide multiple angles for each product so viewers can properly evaluate it. Joelle suggested that if an image adds no value, remove it.
7. Lastly, keep up with trends. Joelle strongly suggested that your product images incorporate innovative approaches so they are widely perceived as aesthetically pleasing and engaging, and that they take your viewers’ emotions into account to reinforce relevance.
WHAT CAN WE TAKE FROM THIS PRESENTATION?
As VS continues to evolve, Joelle’s presentation stresses the endless opportunities SEOs, digital marketing professionals and businesses can seize to stay ahead in their industry. She provided 3 highly insightful VS tactics to consider, along with 6 quick tips we can apply to our image optimisation activities and 7 helpful considerations for publishing product images.