Author Archives: Daniel Page

Eight Tools for Effective Content Curation

(Photo credit: Flickr/cambodia4kidsorg)

Content curation is the collection and sharing of interesting, informative, or entertaining content from within a particular niche. It’s a great way of establishing a reputation as an authority and gathering followers with a particular set of interests.

For businesses, content curation helps demonstrate expertise, is less expensive than content creation, and perhaps most importantly, contributes towards cultivating relationships with potential clients, customers, vendors, and partners.

In an age where social media and content marketing are blossoming, the sheer amount of content out there — of vastly variable quality — makes finding just the right material to share a potentially time-consuming endeavor. Continue reading

Buying Social Media Followers: Pros and Cons

Twitter 6x6 (Photo credit: Steve Woolf)

Social media is an important part of modern marketing. Facebook has over a billion users; Twitter has become many people’s platform of choice for communication with companies; Pinterest has just become one of the thirty most visited sites on the web.

Businesses who have no social media presence are ignoring a powerful and cost-effective marketing and customer relationship resource, especially if they are selling or providing services online.

Unfortunately, establishing that presence and kick-starting engagement has a cost. It is difficult for a business new to social media to attract followers without a substantial investment of both time and money. For many businesses, that’s not an investment they are prepared to make, and so they are tempted to manufacture a follower count.

Follower count, and the ratio of followers to followed, is often viewed by potential customers, media, and marketers as a sign of success. Everyone likes to be popular, and if they can’t be popular, they want to seem popular without putting in the necessary time and work to get there naturally. There are definitely benefits to manufacturing a follower count, but there are also numerous drawbacks. Continue reading

New Google Tool Lets Webmasters Disavow Unwanted Links

Cut the Links?

Since the Penguin algorithm changes, SEOs have been worried about a couple of things in particular. Firstly, they are concerned that all the illicit link-building tactics that their clients’ previous SEOs engaged in (because, of course, they would never do such things themselves) are now going to have negative consequences for a site’s ranking. The second major concern is that competitors can take advantage of Google’s scrutiny of backlink profiles to deliberately create “bad links” and incur a penalty.

To allay some of those fears, Google have released a tool that allows webmasters to disavow incoming links that might be causing their sites to be flagged by Google as engaging in bad link-building practices.

The Disavow Tool allows webmasters to upload a text file of domains and URLs that Google will then disregard, in much the same way that they disregard nofollow links. This is a somewhat out-of-character move for Google, who, as acolytes of the algorithm, prefer to rely on machine intelligence to winnow out the chaff. As usual, Google will take these link lists as a strong signal, rather than as an instruction that they are bound to follow.
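For anyone wondering what such a file looks like, the format is plain text: one entry per line, a domain: prefix to disavow an entire domain, and lines beginning with # treated as comments. The sites below are placeholders, not real offenders:

    # Paid directory that ignored our removal request
    domain:spammy-directory.example

    # Individual comment-spam pages we want Google to ignore
    http://low-quality-blog.example/widgets/comment-page-1/
    http://low-quality-blog.example/widgets/comment-page-7/

Continue reading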

Resource Round-up: The SEO and Blogging Stories You Need to Read in October

(Photo credit: Flickr/BeaGasteiz1)

October has been an interesting month for SEOs and webmasters. Google’s Disavow Links tool put the cat among the pigeons, and as ever, the Internet was abuzz with chatter about SEO, marketing, conversion rates, and design.

We’ve bravely ventured out into the wilds of the Web and brought back 20 of the most interesting articles we came across for your education and enjoyment.

WordPress and Blogging

Should You Worry About Your Site’s Layout?

(Photo credit: Flickr/adactio)

That wily fox Matt Cutts once again set about the clucking SEOs this month with a Twitter announcement that Google were making changes to their algorithm focusing on page layout.

Google uses aspects of page layout as one of the signals that determine SERP ranking. They are especially concerned that, all else being equal, they don’t rank pages highly when the ‘above the fold’ portion of the page does not contain useful content for visitors. What that generally means is that they would rather website owners didn’t fill the first screenful visitors see with adverts and bury the content further down the page.

Unfortunately, Google, with their usual lack of clarity, have failed to stipulate exactly what constitutes good content ‘above the fold’, but the common sense approach is usually best. Google tend to put themselves in the place of their users and ask what that user is likely to find most useful. They may get that wrong fairly frequently, but absent any better data, this is probably the best approach for website owners too.
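As a rough illustration, a content-first layout simply puts the substance ahead of the advertising in the markup. The sketch below is hypothetical, not anything Google has published:

    <!-- The visitor's first screenful leads with real content -->
    <body>
      <header>
        <h1>How to Choose a Web Host</h1>
      </header>
      <main>
        <p>Useful, substantive copy appears above the fold.</p>
      </main>
      <!-- Advertising is pushed below the primary content -->
      <aside class="ad-slot">Advertisement</aside>
    </body>

Continue reading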

Google Introduces Free Tag Management Service

Google have introduced a new tag management service that allows website owners to streamline the process of managing analytics, advertising, and conversion tags on their site.

Anyone working in online marketing will be familiar with the headaches involved in managing the snippets of code that need to be included in sites to provide the necessary metrics for tracking site performance. These tags have to be tweaked, added, or removed fairly frequently, and coordination between marketers and webmasters is often not as seamless as it might be.

With Google’s new tag manager, web developers will be able to add one code snippet to a page, and then allow marketers to manage the rest from a dashboard. Google Tag Manager has the potential to significantly increase the responsiveness and flexibility of tracking a site’s analytics. The new service includes a number of features to streamline the process of adding and monitoring tags: easy testing to ensure that tags added to a page are functioning as they should; version control so that users can roll back changes should they need to; and multi-account and user-permission provisions, so that marketing agencies can manage the analytics and conversion-tracking snippets on their clients’ sites.
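The pattern is straightforward: instead of scattering individual tracking snippets through a page, the developer includes a single container script once, and everything else is configured from the dashboard. The snippet below is a hypothetical illustration of that pattern, not Google’s actual embed code; the container ID and URL are invented:

    <!-- One container include replaces many inline tracking tags -->
    <script async src="https://tags.example.com/container.js?id=CONTAINER-1234"></script>
    <!-- Analytics, advertising, and conversion tags are added, tested,
         and versioned from the dashboard, with no further page edits -->

Continue reading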

Avoiding Duplicate Content with Canonical Links

Buzzword Bingo: Duplicate Content (Photo credit: planeta)

Duplicate content is a problem for search engines, and that makes it a problem for SEOs. Google strongly dislikes including the same content more than once in search results, and if content exists in more than one place on a site, search engine algorithms have trouble determining which version they should include or exclude. They also have difficulty knowing where to assign link juice and authority. This confusion can lead to sites experiencing a loss of traffic and reduced SERP rankings. The “rel=canonical” element is intended to help search engines out by letting them know which page is “the page” when it comes to particular content.

From the perspective of search engine crawlers, duplicate content can be created in a couple of different ways. A site can have the same content on two different pages, or it can have URLs that differ yet point to the same content; both look the same to the crawlers.
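The fix is a single line in the head of each duplicate. If, say, the same product page is reachable at several URLs, each variant declares the preferred version (the URL below is a placeholder):

    <!-- Placed in the <head> of every duplicate or parameterised variant -->
    <link rel="canonical" href="http://www.example.com/products/blue-widget" />

Continue reading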

Resource Round-up: The SEO Stories You Need to Read in September

SEO (Photo credit: MoneyBlogNewz)

Keeping abreast of everything that’s happening in vibrant and dynamic industries like SEO and inbound marketing is no easy task. So, this week, we’re pleased to offer our visitors the first of our monthly roundups of what you need to be reading in SEO, social media, hosting, and web design.

We hope you learn as much from reading it as we did when we compiled it.

SEO and Inbound Marketing

Will the GoDaddy Outage Affect Your SEO?

1948 style US Route 503 highway marker (Photo credit: Wikipedia)

Anyone who hasn’t been living under a rock in recent days will be aware that GoDaddy recently suffered an outage that left millions of its clients’ websites inaccessible. While this sort of downtime will obviously affect a site’s traffic and therefore its revenue, many people are asking whether it will have an effect on their SEO.

The short answer to that question is no, probably not. Google is generally not happy when they come knocking and find that a site in their index has apparently disappeared, but they are also aware that problems occur, and so long as those problems don’t occur regularly, a site’s SERP ranking is unlikely to be degraded. Reliability is important, and a site that is regularly down will take a ranking hit, but a short period of unavailability is an anomaly, not a trend that is useful as a signal.

With that specific case dealt with, it might be useful to consider the more general case of downtime and how it should be handled. It’s occasionally necessary to take a site down for various reasons: making large-scale changes to the software or server, for example. How are we to handle this?
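To give away a little of the answer: one widely recommended approach is to return an HTTP 503 Service Unavailable status during planned maintenance, ideally with a Retry-After header, so crawlers know the outage is deliberate and temporary. A minimal Apache sketch, assuming mod_rewrite and mod_headers are enabled:

    # Serve 503 Service Unavailable for all requests during maintenance
    RewriteEngine On
    RewriteRule ^ - [R=503,L]

    # Tell crawlers how many seconds to wait before retrying
    Header always set Retry-After "3600"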

Continue reading

What’s the Deal with Nofollow Links?

PageRank-hi-res-2 (Photo credit: Wikipedia)

There’s been a fair bit of chatter in recent months about the value of nofollow links for SEO. It’s been claimed that when assessing a site’s backlink profile, Google takes into account nofollow links as a signal of naturalness. Whether that’s true or not, it’s useful for any SEO to understand the nature of nofollow links, and especially whether nofollow backlinks are really worth pursuing.

Nofollow Explained

On occasion, sites would rather that Google did not follow particular links or allow them to affect the target’s PageRank. There are various reasons for this, but one of the main ones is avoiding spam. For example, a common SEO tactic for gaining backlinks was to put thousands of links into the comment sections of blogs and on forums. Many blogs now choose to have all links in comments marked “nofollow” as a way of discouraging such spam. Because nofollow links are ignored for PageRank purposes, spammers now have no incentive to put their links on these pages. Sites like Wikipedia have all their outgoing links marked nofollow for exactly this reason: it discourages people from using Wikipedia’s authority to get PageRank, as Matt Cutts discusses in the video below.
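In markup terms, nofollow is just a value of the link’s rel attribute. The URL below is a placeholder:

    <!-- A normal link, which passes PageRank to its target -->
    <a href="http://example.com/some-page">a useful resource</a>

    <!-- A nofollow link, which Google ignores for PageRank purposes -->
    <a href="http://example.com/some-page" rel="nofollow">a useful resource</a>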

Continue reading