
Blogged Thoughts is our web blog. Expect views, opinions, rants and tirades about anything and everything


 



Blogged thoughts

by the www.akamarketing.com team

Archive for August, 2007


WordPress social bookmarking plugin

Friday, August 24th, 2007

Just installed a neat little plugin for WordPress called Sociable, which outputs image-based social bookmarking links on various WordPress pages to allow my blog readers (all two of them :-)) to quickly bookmark and share interesting posts. A tonne of social bookmarking sites are supported, as shown by the following partial screenshot of the options interface; click the image to view a full-size version.

 

[Screenshot: all the supported social bookmarking sites]

You'll see that at the moment I'm outputting links to about 20 social bookmarking sites (out of a total of about 60), which is more than enough really. I'm very conscious of not having excess flair on my blog by letting too many widgets and icons clutter up the place. Jeff Atwood, on his excellent Coding Horror blog, lists excess flair as #4 on his list of thirteen blog clichés, so I certainly want to avoid that. In saying that, though, I want to make sure I'm linking to all the most popular social bookmarking sites. Since I'm a complete social bookmarking novice, perhaps you guys could tell me if I've missed any of the big players in the SB market.

As usual with WordPress plugins, the install is a piece of cake which everyone should be able to handle. Anyhow, if you're in the market for a social bookmarking plugin for WordPress, I definitely recommend you check Sociable out.


Spin 103.8 meta jacking

Friday, August 24th, 2007

Since I'm still suffering from last night's work social - one last chance to get drunk with my now ex-boss before he left our ICT team - and since I'm off to the fight tomorrow (no, not in the Swiss Cottage - in the Point Depot!) and hopefully the big Dublin & Kerry game in Croker on Sunday, I'm just going to take it ham & cheesy tonight, relax with a few beats and update my blog.

For this update, and in true Richard Hearne style (see Unison & Continuum), I'm going to 'out' someone for not playing by the rules. The unfortunate 'so and so' on this occasion is popular Dublin-based radio station Spin 103.8, who appear to be conducting some good old-fashioned meta jacking on their official website at http://www.spin1038.com/. Meta jacking is the practice of putting your competitors' names in your meta keywords and/or meta description tags in the hope that if someone searches for a competitor's name, your website comes up in the results too. Spin have done this by including the names of rival radio stations such as 98FM, FM104 and Today FM (among others) in their meta keywords tag; similar stuff is being done with their meta description tag too. Although I'm not a lawyer, I believe I'm right in saying that this is a form of trademark infringement and could potentially result in a day in court for the Spin 103.8 legal eagles. The offending meta tag, lifted directly from their home page, is below:

  <meta content="Spin 1038, Spin south west, Spinsouthwest, Spin, Dublin Radio, FM104, 98fm, Today FM, entertainment news" name="description" />

I'm wondering who does their search engine optimisation, because even putting the legal issues aside, this sort of technique doesn't even work. It may have worked somewhat years ago, but these days meta tags aren't given much weight at all by the search engines, so it's actually a complete waste of time. Your thoughts are welcome; additionally, I wouldn't mind a couple of trackbacks :-)…
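If you're curious, you can check any page's tags yourself. Here's a rough C# sketch that pulls a named meta tag's content out of a page's HTML; the regex approach and the sample HTML string are mine for illustration, not lifted from Spin's actual source, and a proper HTML parser would be more robust:

```csharp
using System;
using System.Text.RegularExpressions;

class MetaTagDemo
{
    // Returns the content attribute of <meta name="..." content="..." />,
    // trying both attribute orders; null if the tag isn't present.
    public static string GetMetaContent(string html, string name)
    {
        var m = Regex.Match(html,
            "<meta[^>]*name=\"" + Regex.Escape(name) + "\"[^>]*content=\"([^\"]*)\"",
            RegexOptions.IgnoreCase);
        if (m.Success) return m.Groups[1].Value;

        m = Regex.Match(html,
            "<meta[^>]*content=\"([^\"]*)\"[^>]*name=\"" + Regex.Escape(name) + "\"",
            RegexOptions.IgnoreCase);
        return m.Success ? m.Groups[1].Value : null;
    }

    static void Main()
    {
        string html = "<meta content=\"Spin 1038, FM104, 98fm\" name=\"description\" />";
        Console.WriteLine(GetMetaContent(html, "description")); // Spin 1038, FM104, 98fm
    }
}
```

Run that against a competitor-stuffed page and the rival station names jump right out of the captured content.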



Blogging as a recruitment tool

Monday, August 20th, 2007

Everyone knows the benefits of using a blog and blogging for marketing reasons: blogging is simply one of the best ways to get yourself or your company known as an expert in your industry and, perhaps more importantly, to get in touch with potential clients. Blogging as a recruitment tool, though? That's something I'm currently pondering the merits of. Like most things, I suppose it has its pros and cons. I've noticed that the iQ Content guys seem to be giving it a bash anyhow: they're looking for a project manager and have already got some 'press' from many of Ireland's leading bloggers. The following URLs show this:

http://www.mneylon.com/blog/archives/2007/08/14/project-manager-required-chez-iq-content/
http://www.mulley.net/2007/08/20/fluffy-links-monday-august-20th-2007/
http://www.redcardinal.ie/general/14-08-2007/iqcontent-project-manager/

I'm going to keep an eye on this to see how they get on, but seemingly they've already had applications from people who learnt about the job on other people's blogs. It's an interesting approach, and it sure as hell beats paying sites like Monster & IrishJobs to advertise vacant positions. Good job from iQ Content - pun intended. Your thoughts are, as always, welcome.


Country list for developers

Monday, August 20th, 2007

Recently, while looking for a list of countries to populate a country dropdown box, I came across a real gem which provided exactly what I was looking for: an easy-to-work-with comma separated (CSV) file of 273 countries and territories, available from http://www.andrewpatton.com/countrylist.html. The figure of 273 is based on what various international organisations count as countries.

The data contains not just your basic name column but many other columns too, such as the 2-letter code, the 3-letter code (the letter codes are useful for the value property of dropdown/select boxes), TLD, capital, currency code, currency name, telephone code and two or three others as well. Budding software and web developers should find this handy; I know it saved me a good bit of time anyhow. Incidentally, the chap who maintains the list has the goal of visiting 50 countries before he's 50 - now that's a goal.
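To give an idea of how you might use the file, here's a minimal C# sketch that turns rows into code/name pairs for a dropdown. The column order ("name, 2-letter code, …") is my assumption - check the actual header row of the download before relying on it:

```csharp
using System;
using System.Collections.Generic;

class CountryListDemo
{
    // Parses simple "Name,TwoLetterCode,..." CSV rows into
    // (value, text) pairs suitable for a dropdown/select box.
    public static List<KeyValuePair<string, string>> ParseCountries(string[] lines)
    {
        var countries = new List<KeyValuePair<string, string>>();
        foreach (string line in lines)
        {
            string[] cols = line.Split(',');
            if (cols.Length < 2) continue; // skip malformed rows

            // value = 2-letter code, text = country name
            countries.Add(new KeyValuePair<string, string>(cols[1].Trim(), cols[0].Trim()));
        }
        return countries;
    }

    static void Main()
    {
        string[] sample = { "Ireland,IE", "France,FR" };
        foreach (var c in ParseCountries(sample))
            Console.WriteLine(c.Key + " = " + c.Value);
    }
}
```

In an ASP.Net page you'd then loop over the pairs and add a ListItem(text, value) for each to your DropDownList.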


Cheap as chips SEO link building

Monday, August 20th, 2007

We all know that links are what makes the world go round as far as search engine optimisation is concerned; they can literally be the difference between position 100 and position 1 in the search results. Links can be acquired on a shoestring budget - they can even be acquired with no budget at all. In this post I'm going to outline how I regularly get links for sites I'm working on.

Starting off then - I download the latest directory Excel file from http://info.vilesilencer.com/. This regularly updated file basically lists all the free, SEO-friendly directories (along with PR information) that are actively adding sites; at the moment there are about 500 directories on the list. It's a time consuming, tough and boring task, but what I do is literally submit to each and every one of them, even those with a PR of zero. I'd say about 60% of the links I get from these directories are of below average standard, but it's a numbers game and every little helps.

Most of the time, submitting to these 500 or so directories results in about 150 new links within four or five weeks of completing the last submittal. If you were to attempt something similar, I recommend using two or three variations of your target keyword for your listing title, because this is what most directories use to link to the listing URL, and using only one variation might look like artificial link building to Google, MSN and the other search engines. Be sure to use a throwaway email address, as some of the directories will send you spam. If you're not in an uber-competitive industry, these links will shoot you right up the rankings.

After I've a good base of links from free directories, I then move on to using content to get more higher-quality links. If I'm promoting a site about something I'm knowledgeable in, I'll write two or three articles myself; if, however, I'm working with a client in, say, the gardening industry, which I know nothing about, I'll hop onto http://www.elance.com and have one of my regular content writers do up the articles. Elance has a tonne of talented freelance writers who can produce content on pretty much any topic for very moderate amounts of money.

Articles in place, I then use Google to search for sites which may be interested in publishing them - in return for an embedded link, of course. Examples of search queries I might use include 'keyword1 inurl:submitarticle', 'keyword1 keyword2 inurl:addarticle' and 'keyword1 keyword2 intitle:"submit article"', as well as a couple of other variations. Sites returned from searches like this are highly targeted and therefore likely to publish your articles; however, these sites will most likely publish a lot of 'guest' articles, so your link(s) will be of lesser value.

What I often do is just search for related sites (without any 'submit', 'article' or 'content' type keywords) and simply email the webmaster of each appropriate result, asking whether they're interested in publishing my articles. With this approach I usually have about a 20% success rate (assuming the content is always of a high standard), which I think is pretty decent. That means that to get published 20 times (which equals 20 links) I'd have to visit and email approximately 100 sites - obviously time consuming, but well worth it, as it can result in some very high quality links which can do wonders for your rankings. After being published by a site, I add that site's details (URL, topic, webmaster name, webmaster email, etc.) to an Excel file so I can try to have content published on that site again at some stage in the future. Additionally, I pretty much always submit my articles to the top ten or so article directories, which usually results in another handful of variable-quality links.

There you have it: cheap, perhaps even free (if you can write the content yourself) methods of link building. I use these same methods all the time when working with akamarketing.com and with clients, and I've seen really good results. Be warned, though, that link building, no matter what approach you take, is a very time consuming task. Happy linking!


Repeater paging with an SqlDataSource in ASP.Net

Friday, August 17th, 2007

The ASP.Net repeater control has, over the last while, become one of my favourite controls because, being a templated control, it can be highly customised. Two shortcomings of the current implementation, however, are its lack of paging and sorting capabilities. Of these, I believe paging is perhaps the more desirable feature, so I'll now outline how to implement paging with a repeater using an SqlDataSource. I've chosen to work with an SqlDataSource as this, I imagine, is the most common underlying data source used with repeaters.

PagedDataSource class
The most popular way ASP.Net developers enable repeaters to page through large amounts of results is with the help of the PagedDataSource class. MSDN describes this as a class which

Encapsulates the paging-related properties of a data-bound control (such as DataGrid, GridView, DetailsView, and FormView) that allow it to perform paging and is used by control developers when providing paging support to a custom data-bound control.

Since this class can be used to provide paging support to custom data-bound controls, it can of course be used by built-in data-bound controls such as Repeaters and DataLists to provide the same support. If you look at the MSDN page linked to above, you'll notice, however, that when paging the class expects its underlying data source to implement the ICollection interface. The SqlDataSource control does not implement this interface, so we must get the data into a class that does - in this case a DataView. OK, let's look at the C# code-behind.

public partial class _Default : System.Web.UI.Page
{
    PlaceHolder innerPlaceHolder = new PlaceHolder();

    protected void Page_Load(object sender, EventArgs e)
    {
        //only reset to the first page on the initial visit - resetting on
        //every postback would wipe out the page the user just clicked
        if (!IsPostBack)
            Session["pageNumber"] = 1;

        Page_with_Repeater();
        int totalResults = (int)Session["totalResults"];

        //five represents the page size - you could use session/viewstate
        //to avoid hardcoding, but for this sample it's fine.
        float numOfLinks = ((float)totalResults / 5);

        //determine how many links to create/display (round up)
        if (numOfLinks % 1 == 0) numOfLinks = (int)numOfLinks;
        else numOfLinks = (int)numOfLinks + 1;

        for (int i = 1; i < numOfLinks + 1; i++)
        {
            LinkButton pagingLink = new LinkButton();
            pagingLink.ID = "pagelink" + i.ToString();
            pagingLink.Text = i.ToString();
            pagingLink.Visible = true;
            pagingLink.CommandArgument = i.ToString(); //used to detect the result page required
            pagingLink.Command += new CommandEventHandler(PagingLink_Command);
            innerPlaceHolder.Controls.Add(pagingLink);
        }
    }

    public void Page_with_Repeater()
    {
        //SqlDataSource does not implement ICollection and
        //thus will not work with PagedDataSource; we therefore use
        //a DataView instead, which implements all the required interfaces
        DataSourceSelectArguments arg = new DataSourceSelectArguments();
        DataView dv = (DataView)SqlDataSource1.Select(arg);

        //instantiate a PagedDataSource and set its main properties
        PagedDataSource pagedResults = new PagedDataSource();
        pagedResults.DataSource = dv;
        pagedResults.AllowPaging = true;
        pagedResults.PageSize = 5; //if you change this, change the hardcoded 5 in Page_Load too

        int pageIndex;
        Int32.TryParse(Session["pageNumber"].ToString(), out pageIndex);
        pagedResults.CurrentPageIndex = pageIndex - 1; //CurrentPageIndex is zero-based

        //with the PagedDataSource in place we can feed it into the repeater itself
        repeater1.DataSource = pagedResults;
        repeater1.DataBind(); //the repeater does not bind automatically

        //configure paging numbers - Google style
        //we dynamically create X links based on the total results and the
        //PageSize - we can't create the buttons here because their events
        //only fire if they're added at design time or in Page_Init/Page_Load
        Control outerPanel = FindControlRecursive(repeater1, "placeLinks");
        outerPanel.Controls.Add(innerPlaceHolder);
    }

    protected void SqlDataSource1_Selected(object sender, SqlDataSourceStatusEventArgs e)
    {
        //used to work out how many paging buttons to create
        Session["totalResults"] = e.AffectedRows;
    }

    protected void PagingLink_Command(object sender, CommandEventArgs c)
    {
        Session["pageNumber"] = c.CommandArgument.ToString();
        Page_with_Repeater();
    }

    private Control FindControlRecursive(Control root, string id)
    {
        return null; //body removed for clarity
    }
}

All the important paging-related code is encapsulated in the Page_with_Repeater() function; it's all commented, so I won't repeat myself here. As for what the code does: it displays 5 rows of data at a time from the underlying data source (in this case SqlDataSource1) in a repeater. LinkButtons are dynamically created and added to the repeater to display page numbers as links, allowing the user to select a specific result page. Interfaces often let the user both jump to a specific page (as in this example) and use previous and next buttons to work through the data; on that note, a good example of using previous/next buttons for repeater paging is given on the 4guysfromrolla.com website.

The corresponding .aspx markup for the C# code is very simple: it contains a repeater (named 'repeater1') with an embedded panel (named 'placeLinks'), plus an SqlDataSource (named 'SqlDataSource1') which specifies an event handler for the 'Selected' event so that we can create the correct number of paging links.
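As a rough sketch of that markup - the connection string name and SELECT command are placeholders of mine; only the control IDs and the Selected handler come from the code-behind - it might look like:

```aspx
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:MyDb %>"
    SelectCommand="SELECT Title FROM Posts"
    OnSelected="SqlDataSource1_Selected" />

<asp:Repeater ID="repeater1" runat="server">
    <ItemTemplate>
        <p><%# Eval("Title") %></p>
    </ItemTemplate>
    <FooterTemplate>
        <asp:Panel ID="placeLinks" runat="server" />
    </FooterTemplate>
</asp:Repeater>
```

The panel sits in the FooterTemplate here so the page-number links render once, after the repeated rows.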

Incidentally, you may notice in the C# code above that I've used a custom function called FindControlRecursive to add the dynamically created placeholder (which contains all the page-number LinkButtons) to the statically created panel within the repeater. This is a handy function I came across recently and often use in conjunction with repeater controls (and many other controls too). It accepts a root control and the ID of the target control to look for. It works in a similar way to the standard FindControl method, except that it searches all controls (including child controls) in the control tree hierarchy, whereas FindControl will only search the specific control you pass it without examining any child controls.
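For reference, the usual implementation of such a recursive search is a straightforward depth-first walk of the control tree. This is the common community pattern, not necessarily the exact body omitted from the listing above:

```csharp
using System.Web.UI;

public static class ControlUtils
{
    // Depth-first search: check the root itself, then recurse into
    // every child until a control with the given ID turns up.
    public static Control FindControlRecursive(Control root, string id)
    {
        if (root.ID == id)
            return root;

        foreach (Control child in root.Controls)
        {
            Control found = FindControlRecursive(child, id);
            if (found != null)
                return found; //stop at the first match
        }
        return null; //no control with that ID in this subtree
    }
}
```

Drop it into your page class (or a shared utility class, as here) and it will happily dig a panel out of a repeater's template, which plain FindControl can't do.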

That's it - paging with a repeater using an SqlDataSource is implemented. As you can see, it's not too difficult. If you have any questions, please feel free to ask.
