For the second straight month, Bing's U.S. search engine market share was on the rise. Meanwhile, Google held steady, matching its record share of the search market, and Yahoo finally halted its slide after 10 months of declines, comScore reported.
[Table: comScore Explicit Core Search Share Report, July 2012 vs. June 2012, Total U.S. Home & Work Locations. Source: comScore qSearch. Columns: Core Search Entity; Explicit Core Search Share (%).]
Google’s dominant share of the U.S. search engine market remained at 66.8 percent in July, matching the record it first set in June. In July 2011, Google held 65.1 percent of the search market.
Bing grew for the second straight month, upping its market share from 15.6 percent in June to 15.7 percent in July. Bing was at 14.4 percent in July 2011.
For Yahoo, there was good news: Yahoo search (which is powered by Bing) didn’t lose any market share for the first time in 10 months, holding steady at 13 percent. The bad news? Last July, Yahoo’s search share was a much healthier 16.1 percent.
Ask saw slight gains, growing from 3 percent in June to 3.1 percent in July. Ask was at 2.9 percent in July 2011. Meanwhile, AOL remained unchanged month over month and year over year, at 1.5 percent.
From June to July, Google- and Bing-powered organic searches remained unchanged, at 69 percent and 25.6 percent, respectively.
“Explicit core” searches grew about 4 percent – from 17.1 billion in June to 17.7 billion in July. Google led the way with 11.8 billion searches (up from 11.4 billion in June); second-place Bing accounted for 2.7 billion (up from 2.6 billion in June); Yahoo was third at 2.2 billion (unchanged); Ask was fourth with 548 million searches (up from 516 million); and AOL came in fifth with 264 million searches (down from 265 million).
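Using the rounded volumes above, the growth rate and shares can be reconstructed in a few lines; a quick sketch (results differ slightly from comScore's published figures because the inputs are rounded):

```python
# Explicit core search volumes, in billions (rounded figures from the comScore report)
volumes_july = {"Google": 11.8, "Bing": 2.7, "Yahoo": 2.2, "Ask": 0.548, "AOL": 0.264}
total_june, total_july = 17.1, 17.7  # total explicit core searches, in billions

# Month-over-month growth in total searches
growth_pct = round((total_july - total_june) / total_june * 100, 1)
print(growth_pct)  # 3.5

# Each engine's share of the July total
shares = {name: round(v / total_july * 100, 1) for name, v in volumes_july.items()}
print(shares["Google"])  # 66.7 (comScore reports 66.8; the gap is rounding in the inputs)
```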
As reports circulate that Google is about to enter into a record privacy settlement with the FTC, just how bad is Google’s privacy record compared to those of other major tech companies?
The Wall Street Journal (subscription required) reports that Internet giant Google is on the verge of agreeing to a $22.5 million settlement with the FTC to put to rest charges that it violated iOS users’ privacy by intentionally bypassing the built-in privacy controls in Apple’s Safari Web browser so Google could track their browsing habits. If the settlement plays out as reported, it would be the largest penalty the Federal Trade Commission has ever assessed against a single company. Even though $22.5 million barely represents half a day’s income to Google, it’s probably not an achievement Google will memorialize with a bronze plaque outside its Mountain View headquarters.
This isn’t the first time Google has run afoul of the FTC over user privacy concerns. What’s the basis of the current case and how does it compare to Google’s privacy record with U.S. regulators? And does Google even stand out amongst tech companies taken to task by the FTC over privacy issues?
Google, Safari, and the FTC
The current case being investigated by the FTC surrounds Apple’s Safari Web browser, both on iOS devices like the iPhone and iPad and on Apple’s desktop Mac OS X operating system. Since Safari debuted as a desktop browser all the way back in 2003, it has had a default setting to block third-party cookies — it also featured a “privacy reset” option for clearing cookies and other browser settings. Safari 2.0 (from 2005) was the first to enable a “private browsing” mode — many ridiculed it as a way for Mac users to surf porn sites, but it also offered effective protection against first- and third-party cookies, as well as against tracking by (then still-nascent) advertising networks.
As Google became a major force in online advertising — in part through acquisitions like DoubleClick and AdMob — Google wanted a way to serve personalized ad content and things like its “+1” buttons to signed-in Google users. It did so using a post-back mechanism that enabled it to set cookies in the Safari browser even if the browser was set to disallow third-party cookies. (Stanford grad student Jonathan Mayer analyzed the technical details of the mechanism.) One could argue that Google was only able to do this because of a flaw in Safari, but Google did more with the technique than just determine whether users were signed in to Google and had agreed to receive personalized advertising: the technique also let Google install tracking cookies. So, even if users were blocking third-party cookies in Safari (the default) and were not signed in to Google, Google could still track their actions across not just Google’s own sites, but any sites that carried Google advertising or services. Given the near-ubiquity of things like YouTube and Google’s AdSense advertising services, that’s a major chunk of the Internet.
Google has maintained it did nothing wrong, and began deleting the tracking cookies as soon as it became aware they were being set. It characterized the bypass technique as “known Safari functionality,” said it was deleting any data it gathered as a result of the cookies, and maintained that no harm was done to consumers. Google did, however, collect information about all Safari users it encountered, regardless of whether they had a Google account, were signed in to it, or had agreed to accept social advertising — though there is no indication Google shared that information with other companies. Nonetheless, Google may well have profited from knowing more about Safari users’ browsing habits than its competitors did.
The FTC isn’t alone in investigating these issues: several states’ attorneys general have launched their own probes, and European regulators are also looking into Google’s bypassing of Safari’s built-in privacy tools.
The Safari situation puts Google in hot water because the company had previously entered into a 20-year consent decree in 2011 for “deceptive privacy practices” surrounding the launch of Google Buzz. In that case, Google escaped having to pay any fines, but it did agree to implement a comprehensive privacy program, and subject itself to regular independent privacy audits for 20 years.
Google Buzz, for folks who don’t recall, was Google’s initial ill-fated effort to leverage its widely used Gmail service into a social networking platform. To launch the service, Google enrolled Gmail users in aspects of Google Buzz without their consent, which resulted in details of users’ contacts and correspondents automatically being disclosed to other users — in some cases even if they declined to try out Google Buzz. By the end of the year, Google had killed off Google Buzz and switched its focus to Google+, but the damage was done: Google had not only flubbed its first serious move into social networking, it had brought down 20 years of federal scrutiny about its privacy practices too.
As a result of the Buzz fiasco, Google can be liable for up to $16,000 per day for each violation of its consent agreement with the FTC. If the $22.5 million figure cited by the Wall Street Journal is accurate and the $16,000-per-day fine is the basis for the penalty, that could mean Google is essentially admitting it tracked Safari users without their consent for the better part of four years.
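The arithmetic behind that four-year estimate is easy to check; a back-of-the-envelope sketch (it assumes a flat $16,000-per-day rate, a simplification of how the FTC actually computes civil penalties):

```python
settlement = 22_500_000  # reported settlement, in dollars
daily_fine = 16_000      # maximum civil penalty per day under the consent decree

days = settlement / daily_fine
years = days / 365

print(days)              # 1406.25 days of violations at the maximum rate
print(round(years, 2))   # 3.85, i.e. the better part of four years
```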
What about everyone else?
A number of federal agencies monitor aspects of many Internet companies’ businesses, and Google doesn’t just tangle with the FTC: just a few months ago the Federal Communications Commission fined Google a paltry $25,000 in connection with its Street View vehicles collecting personal information as they cruised past Wi-Fi hotspots. Small agency though it is, however, the Federal Trade Commission bears primary responsibility for consumer protection. How have other Internet giants fared with the FTC?
Not so well, as it turns out. Perhaps the most public settlement with the FTC over privacy issues was from social networking giant Facebook: the FTC accused Facebook of failing to keep a number of privacy-related promises it made to users, including making formerly-private information public, sharing data with third parties without user consent, keeping data around and accessible even after accounts were deleted, and falsely claiming it complied with the U.S.-EU Safe Harbor Framework for data transfer. For all that and more, however, Facebook paid no penalties — but it did agree to the same 20 years of independent, third-party privacy audits later applied to Google.
Social networking aggregator Spokeo also had to settle with the FTC — and it didn’t get off for free, agreeing to pay $800,000 to settle charges that it violated the Fair Credit Reporting Act and engaged in “astroturfing” by posting false endorsements of its services to blogs and Web sites. However, unlike Google and Facebook, Spokeo isn’t a primarily consumer-facing service. Rather, it collects and aggregates information about individuals from social networking sites and the Internet, bundles it up, and sells it to recruiters, background screeners, and human resources departments — if you’ve ever had a foul-mouthed tweet or drunken Facebook photo come back to haunt you during a job interview, Spokeo may be why. The FTC alleged, among other things, that Spokeo failed to comply with requirements governing consumer reporting agencies.
What about social networking sites? Believe it or not, in May MySpace had to work out a settlement with the FTC for sharing personal information with third parties without user consent. Sound similar to Facebook? It should — and, like Facebook, MySpace didn’t have to pay a penny, but did have to agree to having its privacy practices audited for the next 20 years.
Twitter hasn’t emerged unscathed either — although the circumstances are different. Twitter agreed to have its security and privacy practices audited for 20 years as a result of two security breaches, in January and May of 2009, during which attackers were able to get administrative access to Twitter — including access to private information and the ability to generate phony tweets. In these instances, Twitter didn’t promise one thing and do another — it promised users privacy and wound up getting hacked. Something similar happened with game site RockYou, from which hackers managed to glean some 32 million email addresses during an attack. However, RockYou also wound up agreeing to pay $250,000 in penalties because it had collected personal information from nearly 180,000 children without their parents’ consent, in violation of the Children’s Online Privacy Protection Act (COPPA), which bars the collection or sharing of children’s information online without parental consent.
COPPA has been at the core of settlements the FTC has reached with many technology companies, including Broken Thumbs Apps, Skidekids, and Xanga.com. The Xanga case (from 2006) involved the highest fine ever levied for a COPPA violation: $1 million. Xanga knowingly collected and disclosed information about 1.7 million children age 13 and under, without parental consent, over a period of five years.
Even Microsoft has run afoul of COPPA. Back in 2002 the company reached a settlement with the FTC over its Passport single sign-in and wallet service, which was billed as letting users easily and safely make purchases from participating merchants, and even set up accounts for kids that limited the collection of personal information by participating sites; among other things, Microsoft was found to have misrepresented what information about children was shared with third parties.
It happens all the time… your website finally secures a starring role on the first page of a Google SERP. And then…
Google “slaps” your website, sending it into virtual purgatory (SERP page 1,398,530 or beyond) — effectively flushing your web-based income down the toilet.
Google’s infamous slaps strike without warning, penalizing websites that somehow offend its never fully disclosed notion of “correct and proper” SEO.
But now, Google is giving advance warning that it intends to slap — believe it or not — SEO itself! SEO, of course, is the art and pseudo-science of intuiting Google’s rules so that your website, in a perfect world, appears and stays on the first page of a Google SERP.
But the world is far from perfect; indeed, it is ineffable, and Google prefers it that way. Because Google lives in constant fear that bands of ingenious little techno-nerds and black-hat bandits will hijack their search algorithms, and “game” their system — bringing down their galactic cyber-cash cow, like Visigoths sacking ancient Rome — not only do they never fully explain their rules, they keep changing them! So, at best, SEO has always been a gamble… a guessing game.
Their most recent algorithm change was Panda, which penalized websites for, among other things, too many low-quality ads or links above the fold, and for poor-quality traffic overall.
And now, here comes…
The Newest Google Slap
So new, in fact, this Google slap doesn’t even have a name, nor has it been activated yet. But it will be, says the man in charge, Matt Cutts.
Matt Cutts, you see, is the head of Google’s Webspam team, and he leaked a bit of info recently at Austin’s SXSW convention that has sent web-marketers and SEO professionals into a virtual tailspin.
“…We don’t normally pre-announce changes but there’s something we’ve been working on over the last few months and hope to release it in the next few months or few weeks. All those people doing, for lack of a better word, over optimization or over SEO — versus those creating great content and a fantastic website — we’re going to level the playing field. We are trying to make GoogleBot smarter, make our relevance more adaptive, and we are also looking for those who abuse it, like using too many keywords on a page, or exchange way too many links, or go well beyond what you normally expect. We have several engineers on my team working on this right now.”
No doubt, the question you’re now asking yourself is:
How much is too much SEO?
Indeed, what is over-optimizing, or over-SEO-ing? Well, you can bet your top page ranking that Google isn’t going to tell you any more than what Matt said above. So don’t bother trying to micro-analyze his statement or guess how many keywords or links are too many on any given webpage. Google’s algorithms are probably among the world’s best-kept secrets. Governments would pay dearly (and probably do) to learn how Google keeps its cyber-vaults hacker-proof.
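That said, one over-optimization signal Cutts names, "too many keywords on a page," is at least easy to measure yourself. Here is a minimal sketch of a keyword-density check (the function and sample page are illustrative; Google publishes no density threshold):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` equal to `keyword` (single-word keywords only)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return hits / len(words)

# A hypothetical, obviously stuffed page snippet
page = "cheap widgets cheap widgets buy cheap widgets today cheap widgets"
print(f"{keyword_density(page, 'widgets'):.0%}")  # 40%
```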
So, unless you can somehow mind-meld with Matt Cutts’ brain… you’ll just have to…
Create content that appeals to people, not bots.
Hardly a revolutionary idea. In fact, this “idea” has been promulgated ever since Internet marketers stopped living in the world of flesh and blood and chose to live and market in the cold, black, binary world of cyberspace.
So what’s the answer then to the question: How much SEO is too much SEO, or more to the point, what is to become of content marketing as currently practiced?
The answer is revealed when you…
Stop worshipping Google.
Look, when it comes to content marketing, so many companies today are hiring anyone who can tap, tap, tap on a keyboard and conjure up articles stuffed, to whatever degree, with keywords. Yet, these articles have so little actual value or use to readers — indeed, they’re not intended for human eyes — and these companies state this, unabashedly. These articles are written instead for Google’s bots. In fact, when advertising for writers, these companies will state, unequivocally, they’re looking for “SEO writers” — that is, anyone experienced with keyword research who can strategically insert keywords into a 750-word article. The actual “writing” of these articles is only incidental to the job. No real writing talent or ability is required, because there’s no need to connect, on any level, emotionally or intellectually, with a human being.
Could this slap then be the final fatal blow to content marketing?
No doubt, you’ve read these types of articles yourself (or published them). They’re innocuous, banal, and often created by unemployed housewives with no experience with, or intrinsic knowledge of, the subject at hand, or by offshore content factories, where English is a second language and the price and speed of delivery is their main value proposition. This is the type of content-marketing abuse Google is looking to stop.
To its credit, Google’s aspiration, vis-a-vis SEO, is to provide targeted, and most of all, valuable, actionable, qualitatively superior content to those searching for it. To that end, Google has upped the ante — penalizing those who attempt to game their system, tricking it into rewarding their websites with a higher SERP placement, which would otherwise be given to websites that serve searchers better and more honestly.
Google, on Tuesday, was awarded a patent for “advertising based on environmental conditions.” In other words, Google has patented the technique of using environmental factors gathered through a device’s sensors to target ads at users.
“When determining what ads to serve to end users, the environmental factors can be used independently or in combination with matching of keywords associated with the advertisements and keywords in user search queries,” the patent reads.
“A web browser or search engine located at the user’s site may obtain information on the environment (e.g., temperature, humidity, light, sound, air composition) from sensors. Advertisers may specify that the ads are shown to users whose environmental conditions meet certain criteria.”
So Google can now deliver targeted ads to users based on their surrounding environment. For example, the patent notes, temperature information gathered by a phone’s sensors can be used to flash ads for air conditioners (if temperatures exceed a certain threshold), or winter coats (if temperatures fall below a certain benchmark).
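In code, the patent's temperature example boils down to simple threshold rules over a sensor reading; a toy sketch (the thresholds and ad categories here are invented for illustration, not taken from the patent):

```python
def pick_ad(temp_f: float) -> str:
    """Choose an ad category from a temperature reading (hypothetical thresholds)."""
    if temp_f >= 85:
        return "air conditioners"
    if temp_f <= 40:
        return "winter coats"
    return "keyword-matched ads"  # fall back to ordinary keyword targeting

print(pick_ad(95))  # air conditioners
print(pick_ad(30))  # winter coats
```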
Sensor info isn’t the only environmental information Google wants to analyze with the patent. Google also wants to analyze background information:
“An audio signal that includes a voice instruction from a user of the remote device can be received, and the environmental condition can be determined based on background sounds in the audio signal,” the patent reads.
In other words, if you’re at a sports event and you call GOOG-411 for info about a nearby restaurant, Google will be able to identify the sporting event based on background noise heard through the handset’s microphone, and ads related to fans of that sport will be pumped to your phone.
This week Google started rolling out Google Search Plus Your World, which — besides being the worst case of bad branding in a long time — will cause Google a lot of problems. Searchers will go elsewhere and governments will complain. Here is why.
The idea behind Google Search Plus Your World (let’s call it +World, shall we?) is good.
Personalized and social search
Google has presented personalized search results for a long time, using data from your Gmail account (if you have one) and your web history. Google has been using these data to build a kind of personality or interest profile for you, making it easier for them to deliver search results that are of interest to you personally.
If you are a computer geek, searches for “apple” are therefore more likely to bring up results about the computer company, rather than the fruit or the Beatles’ music company.
Google has also tried to enrich search results with real-time data from the social web. For a time it did, for instance, include Twitter messages (tweets), which delivered information about what is happening right now. This was definitely a good idea.
+World is an attempt to combine the two and add personalized social data to the search engine results. That should be a recipe for success. Instead we believe Google is facing a PR disaster. You see, the implementation of +World is bad, very bad.
Twitter and Facebook not on board
It is not all Google’s fault. The fact that Twitter and Google could not reach an agreement on the delivery of tweets to search results makes it hard for Google to add Twitter data. Twitter has actually added a rel="nofollow" attribute to links in tweets, effectively forbidding Google to follow and use those links in its algorithm.
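Mechanically, rel="nofollow" is just an attribute on the anchor tag that well-behaved crawlers check before using a link; a simplified sketch with Python's standard library (an illustration of the convention, not of how Googlebot actually works):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect hrefs from anchor tags, skipping links marked rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.followable = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attr = dict(attrs)
        if "nofollow" in (attr.get("rel") or "").split():
            return  # honor the hint: don't use this link as a ranking signal
        if "href" in attr:
            self.followable.append(attr["href"])

html = '<a href="/ok">a plain link</a> <a rel="nofollow" href="http://t.co/x">a tweet link</a>'
parser = LinkCollector()
parser.feed(html)
print(parser.followable)  # ['/ok']
```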
Much of Facebook is also off limits for Google. In other words: there are strict limits to what kind of data Google can fetch from the two most popular social services in the world.
There is nothing to stop them from linking to personal profiles on Facebook and Twitter, though.
Instead, Google has decided to make use of its own social network, Google+. If you have a Google+ account, you will now find a lot of links to Google+ posts and pages in your search results. (If you don’t yet, you soon will: +World is being rolled out, but is not yet fully launched.)
When I search for “search engine marketing”, for instance, one of the top search results is a link I posted to a Pandia Wrap-up in December, which is in no way useful. Then there are a lot of links to other Google+ posts that may or may not be relevant to my search.
Phil Bailey and Danny Sullivan correctly point out that this favoring of Google+ appears even if you turn off personalized search/+World. Instead of much more relevant links to content-rich Twitter and Facebook pages, you get links to less useful Google+ pages.
By adding several Google+ links to the search engine result pages, Google is in effect diluting the quality of search results, making the search engine less attractive for searchers. As Gizmodo put it: “Google Just Made Bing the Best Search Engine.”
This past weekend, a friend called me to talk about his latest SEO scheme: he had paid five dollars for a backlink program from .edu domains. I advised him that paying for backlinks from websites that had nothing to do with his business wouldn’t benefit him much after Google’s 30-pack update on January 5th. Google has become very wise to backlinks and the spamming that is associated with them.
On the heels of that conversation, Search Engine Watch posted the following article.
The process has already started, and as a publisher you need to make sure you are adapting your marketing strategy to line up, or get left behind.
Google popularized the link-based ranking algorithm in the late 1990s and early 2000s. It was a revolution in its time because it provided search engines with a method for identifying the most important web pages for a given topic. However, as has been well documented, spammers have assaulted the algorithm with a wide variety of methods for buying links, or for creating them in other ways the algorithms were never meant to reward.
Even if you generate all your links in a pure white hat way, through reaching out to site owners and requesting them without compensation, or are doing high quality guest posts, you aren’t necessarily generating the best possible signal for search engines. Certainly this type of link building done properly would not be a violation of the Webmaster Guidelines, but from the perspective of the search engines it also doesn’t represent a groundswell of opinion raving about your product. It still means something, but it is brute force driven through your efforts, rather than resulting from the enthusiasm of your audience.
I don’t believe that search engines will penalize people who link build this way, but I think they will value the link profile that is manually built less than one that obtains unsolicited endorsements from the web.
Prior to the emergence of Google, links weren’t a ranking factor in a significant search engine. At that time, any unpaid links were implemented solely based on merit, because the publisher had no other reason to link to someone else’s page. Even paid ads were based on the advertiser valuing the traffic from the target site enough to be willing to pay for it, since there was no other benefit – so these too went to highly relevant pages as a rule.
Short and simple: links were a better quality signal when the world didn’t know that they were a signal. But, those days are gone.
What the Search Engines are Doing
The search engines are constantly looking for additional signals to help identify the best results to return for a given query, and to make it harder for spammers to succeed in ranking lower-quality sites (lower quality than others that are available on the web). The increased use of social signals by the engines has been a part of that effort.
However, social signals are relatively noisy. As I documented in “Social Signals and SEO: Focus on Authority,” the number of people on the major social sites that are actively recommending sites/content is still a relatively small percentage of the population.
That same article also documented how using social media’s “wisdom of the crowd” (showing the most liked articles) was something that Bing tried, but then later removed. I believe that this happened because using social media mentions as votes in the same way that links were used did not really work, even in the limited fashion that Bing tried it.
I expect that for many categories of searches, search engines will weight sites that show multiple types of signals more heavily than those that show only one. Back in July I wrote about “The Dangers of a One Dimensional Link Building Plan.” But in addition to avoiding a single type of link building, you should also be careful not to use old-fashioned link building as your only method for promoting the site. Find a way to get the web to generate other signals about what you are doing!
The first key is to focus on where your audience is (what sites they visit, what videos they watch, whose columns they read, …). Think like a pre-Internet marketer would when trying to decide how to spend their ad dollars. Ranking signals can be generated by both your potential customers and the publishers of the content on the web that they visit.
Potential customers can create signals by:
- Talking about you in social media.
- Visiting your site.
- Searching on your brand name.
- Doing a search for products or services like yours and clicking on your search result.
- Discussing you in comments on blogs or forums.
There are a lot more methods than these few!
Publishers of the content that your audience consumes can generate signals as well, in the form of good old-fashioned links. So what are the ways to encourage the generation of these types of signals?
As per my recent columns, you should certainly focus on authority, and seek to become an authority. Even if you aren’t yet an authority yourself, you can do things to get your name out there to start getting exposure to authorities and to build visibility with others. Here are a few specific ideas on how you can do that:
- Start a blog: But only do this if you can produce unique, high-quality content on a regular basis. It is a real time commitment. However, don’t emphasize volume over quality. Two great articles a month will do far more for you than four decent ones a month, or 10 crappy ones.
- Start a social media campaign: Become an active community member. Read the Become an Authority article for more tips on how to do that effectively. Note that it is better to execute extremely well at one social media site than it is to do an OK job in several.
- Participate in communities: If you can’t start a blog or drive a highly active social media campaign, you can still participate in communities. Comment on blogs, forums, videos, or whatever medium your potential customers consume. In other words, as a fallback to Becoming an Authority, work at becoming known. Drive interactions that take place in front of your target audience. Go to conferences and engage in dialogues. Be the person that asks a great question of one of the speakers during the Q&A.
- Generate press releases: Issue press releases from time to time, but only when there is something worth talking about on the web.
- Generate news: Do something newsworthy that someone else would be interested in writing about.
- Advertise on web sites where your target audience goes: Not for the purposes of buying links, but for exposure to your target audience, and to the people that publish content that your audience consumes.
- Advertise in search engines: More great exposure!
- Advertise on Facebook: For the same reasons, but only use this one if you can reach your potential customers there.
Regardless of where you are in the process of building your own authority, do some things to attract positive attention to your website. Participating in discussions online is a great place to start. Participating in offline discussions that you can use to help drive online interactions is also a great thing to do.
The key is to create great signals in addition to the links that your site attracts.
If you haven’t discovered the +1 button yet, you are missing out. It can lift your favorite sites toward the top of your search results (where applicable), and if you’re a business owner, it’s another great way of showing up high on SERPs.
In a move that could raise some charges of anti-competitive behavior, Google has begun integrating Google+ brand page information into primary search results.
The inclusion, noted by researcher BrightEdge, appears only for some brands at the moment. BrightEdge, which has tracked Google+ brand pages since they went live on Nov. 7, just noticed the Google+ integration on Dec. 20. (Though Search Engine Land discovered it last month.) In particular, the company identified the following Google+ brand page results in a search for AT&T:
As Brad Mattick, VP-marketing for BrightEdge notes, the addition of G+ brand pages in this case allows the marketer to wedge in a promotional message. In this particular case, a call for a sweepstakes gets a much bigger audience via Google natural search results than it would have otherwise.
Though AT&T appeared to be one of the first brands to get such treatment, a search for Toyota showed two Google+ entries (from late November).
Other brands, including T-Mobile and Macy’s, also displayed G+ results in their searches. A Google rep offered the following statement about the search results: “Content from the +Page, such as recent posts, will appear as annotations attached to its associated web page under the sitelinks in search results if that site is eligible for Direct Connect. It uses the same bi-directional link and algorithmic criteria as Direct Connect.”
For Mattick, integrating G+ brand page information into search results is an obvious enticement for brands to join and be active on Google+. Mattick says he believes blurring the lines between G+ and search results parallels Microsoft’s inclusion of the Internet Explorer browser in its Windows OS in the 1990s. The U.S. Department of Justice accused Microsoft of using its Windows near-monopoly to beat Netscape in the browser segment.
Forbes posted another great article on social media trends. I thought it was a must-read for businesses looking at embracing social media in their advertising programs this year.
2012 is primed to be the year of social. In particular we can anticipate a blitz of publicity around social business. But social media still has room to surprise, too. Talking with a group of people recently, including Lloyd Armbrust at OwnLocal and Tom Smith of Global Web Index (and reading his blog), I picked out four megatrends that will shape social as it truly comes of age.
The growth of the transmitter ecosystem
Facebook, Twitter, and Google have brought many more people into the online conversation. They’ve pretty much demolished the barriers to creating online content – which is also good news for brands that are smart enough not to throw too much money into too many channels.
But another part of the story is that more channels create a larger need for content. Many millions of those people now active online are not, however, content producers. They are sharers and curators.
We have a content discovery challenge and we have curators to manage it. The importance of their role is on the rise.
But does this mean we are migrating from a peer-to-peer conversational network to a more top-down one, where we become increasingly dependent on those curators with large follower groups? Does that make Facebook, Twitter and Google Plus top-down networks?
Tom thinks so, but I have my doubts. Blogging too was very top-down, and I sense, by way of contrast, a strong peer culture in Google Plus.
Around the time Facebook became famous, a well-known blogger asked me: why do I need Facebook? I know how to set up a website. The answer, of course, is that Facebook, then Twitter and now Google Plus provide you with the tools to communicate and the audience to talk with. Bloggers had to go out and find that audience, and it was uphill for those who came even slightly late to it. There is no uphill in 2012, but there will be a growing role for the transmitter ecosystem.
The age of global
When American broadcaster ESPN wanted to extend its remit outside North America, it bought cricket blogging site cricinfo. So now a major US network is big in a sport that Americans don’t follow in a country half a world away.
One of the most telling examples of a new emerging global culture can be found in a sport. When website cricinfo set up initially it was a placid English affair. But cricinfo pioneered live blogging of cricket matches and began to make the web relevant to sports fans without national boundaries or national broadcasting rights getting in the way. The site eventually found a market in India where cricket is treated almost like a religion.
Separately, PlayUp is now building out the social network for global sports fans, more of which tomorrow. One of the beauties of cricinfo, and the same applies to all sports, is that reporters can follow and report on the tweets of celebrity sports people, or tweet themselves from the training ground or nightclub. When English players misbehaved in New Zealand during the recent Rugby World Cup, it was global news immediately. A club bouncer uploaded CCTV footage to YouTube. Content is instant, continuous and pervasive. There is no reason why a national boundary or national broadcasting rights should exclude me from engagement.
In the start-up community, even Silicon Valley start-ups now want to hire talent from wherever, as long as it’s the best. Nairobi and Istanbul are, along with numerous other cities, start-up hot spots attracting American and European interest. The start-up is suddenly a global culture.
There’s a new internationalism that dovetails with what is happening in the economy: more global, multi-polar, more equal – see this thread on Google Plus, which discusses whether Google Plus is responding quickly enough to this desire to engage with global audiences. People care about this new globalism, whether it arrives at their desk through sport or business or fashion or food. We need to work out how to become global online citizens.
The Justice Department on Friday gave the green light to Google’s $400 million acquisition of AdMeld, a major display advertising company.
The agency said the deal can proceed without any conditions, because a detailed analysis by antitrust lawyers found there are enough competitors that offer services similar to AdMeld, a company that helps online publishers sell their ads.
For the past two years, Google has been steadily acquiring dozens of companies across a wide swath of areas, drawing scrutiny from competitors, regulators and some consumer advocates who worry that the company is leveraging its dominance in online searches to expand into other businesses. The deals have run the gamut, with Google purchasing everything from restaurant review business Zagat to cellphone maker Motorola Mobility.
“Although the Antitrust Division concluded that this particular transaction was unlikely to cause consumer harm, the division will continue to be vigilant in the enforcement of the antitrust laws to protect competition in display and other forms of online advertising,” Justice said in a statement.
“We’re pleased that the U.S. Department of Justice has today cleared this deal,” said Neal Mohan, Google’s vice president of display advertising, in a company blog post.
On October 18th, 2011, Google announced it would begin encrypting search queries for privacy reasons. Unfortunately, this disrupted organic keyword referral data, returning “(not provided)” for some organic traffic. The share of such traffic increased in the weeks following the launch.
Most users used non-SSL Google for their searches. Now, according to Google, “…a web site accessed through organic search results on http://www.google.com (non-SSL) can see both that the user came from google.com and their search query… However, for organic search results on SSL search, a web site will only know that the user came from google.com.” The effects were obvious immediately.
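The mechanics can be illustrated with a short sketch: a site extracts the keyword from the `q` parameter of the Google referrer URL, and when SSL search sends only a bare referrer, no query survives to extract. This is a hypothetical, minimal example of the behavior described above; the function name and the “(not provided)” fallback mirror how analytics packages label this traffic, not any actual Google code.

```python
from urllib.parse import urlparse, parse_qs

def keyword_from_referrer(referrer: str) -> str:
    """Extract the search query from a Google referrer URL.

    Non-SSL referrers like http://www.google.com/search?q=seo+tips
    carry the query in the 'q' parameter; SSL search sends a bare
    referrer such as https://www.google.com, so no query survives.
    """
    query = parse_qs(urlparse(referrer).query).get("q", [""])[0]
    return query if query else "(not provided)"

# Non-SSL search: the query is visible to the destination site.
print(keyword_from_referrer("http://www.google.com/search?q=seo+tips"))   # seo tips
# SSL search by a signed-in user: only the origin is visible.
print(keyword_from_referrer("https://www.google.com"))                    # (not provided)
```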
In this emergency Whiteboard Friday on seomoz.com, Rand goes over the changes Google has made, why Google says it happened (and why it really might have happened), and what you can do to stay calm and fight back.
Google’s statements on the topic are:
We’ve worked hard over the past few years to increase our services’ use of an encryption protocol called SSL, as well as encouraging the industry to adopt stronger security standards. For example, we made SSL the default setting in Gmail in January 2010 and introduced an encrypted search service located at https://encrypted.google.com four months later. Other prominent web companies have also added SSL support in recent months.
As search becomes an increasingly customized experience, we recognize the growing importance of protecting the personalized search results we deliver. As a result, we’re enhancing our default search experience for signed-in users. Over the next few weeks, many of you will find yourselves redirected to https://www.google.com (note the extra “s”) when you’re signed in to your Google Account. This change encrypts your search queries and Google’s results page. This is especially important when you’re using an unsecured Internet connection, such as a WiFi hotspot in an Internet cafe. You can also navigate to https://www.google.com directly if you’re signed out or if you don’t have a Google Account.
What does this mean for sites that receive clicks from Google search results? When you search from https://www.google.com, websites you visit from our organic search listings will still know that you came from Google, but won’t receive information about each individual query. They can also receive an aggregated list of the top 1,000 search queries that drove traffic to their site for each of the past 30 days through Google Webmaster Tools. This information helps webmasters keep more accurate statistics about their user traffic. If you choose to click on an ad appearing on our search results page, your browser will continue to send the relevant query over the network to enable advertisers to measure the effectiveness of their campaigns and to improve the ads and offers they present to you.
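To gauge how much of a given site’s keyword data has gone dark, one can tally the share of organic Google visits reported as “(not provided)” in a keyword report. A minimal sketch, assuming the report has been exported as (keyword, visits) pairs; the sample figures below are illustrative, not numbers from the article.

```python
def not_provided_share(keyword_visits):
    """Return the percentage of organic visits whose keyword
    was withheld as '(not provided)'."""
    total = sum(visits for _, visits in keyword_visits)
    hidden = sum(visits for kw, visits in keyword_visits
                 if kw == "(not provided)")
    return 100.0 * hidden / total if total else 0.0

# Illustrative sample report: 140 of 1,000 visits withheld.
report = [
    ("(not provided)", 140),
    ("seo tips", 600),
    ("link building", 260),
]
print(round(not_provided_share(report), 1))  # → 14.0
```

Tracking this percentage week over week shows whether the effect is growing, as it did in the weeks after the launch.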