SEO analysis: camasi.com.ro

Hello, in this message I analyze the web site camasi.com.ro from an SEO perspective.

Some initial notes:
– If you wish to print the current review, it’s best to print the PDF version; it offers the best chance for both the images and the special characters (diacritics) to show up correctly;
– I will repeat the most important things at the end of the review;
(+idee) – Text starting with this marker shows an idea – something new and original which can be implemented on the web site; I will repeat these at the end of the review;
(+important) – This marker starts the points which I consider very important; I will repeat them at the end of the review;

Contents:

A. Detailed SEO analysis
A1. First impression
A2. Accessibility and navigation ease by the search engines’ spiders
A3. Checking the web site in search engines
A4. Checking the hosting server and the domain
A5. The URLs
A6. Web site titles
A7. Content
A8. Meta tags (meta description, meta keyword)
A9. Redirects and localization
A10. Links from you to yourselves (internal links)
A11. Links from you to others (to exterior)
A12. Links from others to yourself (links from the exterior)
A13. Analysis of the traffic and of the searches
A14. Semantic HTML
B. Other good practices, other bad practices
C. Conclusions – important things + final conclusions


A. Detailed SEO analysis


A1. First impression

A1.1. Usability
Theory:
Web site usability is important for the user experience in general; it also has a secondary, SEO importance – if the search engines consider that your web site doesn’t offer a pleasant experience to the users, this may negatively influence your positioning in the search engine results pages; remember, as a general rule, that if you ensure a pleasant experience for the visitors, the engines will appreciate this;
It’s best when a web site has a sitemap and a search function; these are two very good elements to help navigation; by sitemap I mean something like this: Apple’s sitemap;
It’s best when any page of the web site can be accessed via a menu, or via a different page, or, at least, through a page like “Sitemap”;
Breadcrumbs are also a valid option;

Practice:
Search function – it’s present in the upper part of the web site, even though not in the upper-right corner;
Sitemap – you don’t have an HTML sitemap, see the theoretical part for solutions on that;
Accessibility – any page of the web site can be accessed via the categories, this is fine;
You don’t use breadcrumbs; it would be better if you did;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): minor problems only;

Things to do next:
Create a sitemap in HTML format, perhaps implement breadcrumbs;
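As a sketch only, a breadcrumb trail on a product page could be as simple as the HTML below (the URLs and the pairing of category and product are illustrative assumptions – the names themselves are taken from your web site):

<p class="breadcrumbs">
<a href="http://www.camasi.com.ro/">Acasa</a> &raquo; <a href="http://www.camasi.com.ro/categorie.php?scid=5">SLIM FIT_ Maneca Scurta</a> &raquo; BLOOM BLUE
</p>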


A2. Accessibility and navigation ease by the search engines’ spiders

A2.1. Navigation without JavaScript, cookies, Flash, CSS, forms, login, frames, splash (intro) pages
Theory:
Why is it a problem if some parts of the web site can’t be accessed without JavaScript? You can see here – W3 Schools browsers statistics – that the percentage of visitors who don’t use JavaScript is around 5% of the total visitors of a general web site – this is a huge number! 5% of your visitors can’t navigate the web site properly if you rely on JavaScript for navigation; from the search engines’ perspective, the problem is even bigger – there are real problems in navigating a web site whose links are generated via JavaScript; see Google’s solution to crawling via JavaScript;

Also, the engines don’t generally use cookies; details on this on Google’s web site;

Although Flash can be indexed, my suggestion is to avoid using Flash for navigation elements (menus, links); details, again at Google’s;

CSS can be abused to hide elements in a page; in other words, you can write a portion of text just for the search engines and hide that text from regular users by using CSS; this is a negative behavior; you can read more on Google’s web site; also, the search engines don’t interpret CSS very well, so it’s best to have a web site that can be browsed without CSS;
Read Google’s guidelines for details;

Also, it should be possible to navigate a web site without using forms – for example, if my web site is a dictionary and on the very first page there is a form which forces me to insert a word to find a definition, there is a problem; the engines will not insert random words into the form to find all the possible combinations; in this situation, you should offer alternative ways to navigate the web site (links to words, a sitemap), and navigation by filling out a form should be just one option among several;

The same goes for the login option – if my web site is a social network and all of the site’s content is “hidden” behind a login page, the engines will lack the option of browsing the web site;

Frames (displaying two pages, with different URLs, within the same browser window) are a web development solution which can make the browsing by the search engines difficult;

Intro pages are pages displayed prior to entering the web site; a lot of times they are used for purely aesthetic reasons, ignoring functionality; their main problem is that they move the web site content yet another click away from the site entry, forcing you to wait for the page to load and to click on it in order to visit the web site; I advise you not to use them, mainly for SEO reasons, but also for usability reasons (the first time you see an intro page it may be interesting; the second time you are already annoyed by it);

To test a web site without JavaScript, in Firefox go to: Tools => Options => Content => Enable JavaScript (uncheck); after you test, you should return to the initial settings;

To navigate the web site without cookies, go to Tools => Options => Privacy => Use custom settings for history => Show cookies => delete only the cookies of your web site, by doing a search => Close => uncheck Accept cookies from sites; after you test, you should return to the original settings;

A web site can be seen without Flash this way: Tools => Add-ons => Plugins => Shockwave Flash => Disable; don’t forget to set Enable after testing;

How does a web site look without CSS? I recommend using Firefox and choosing View => Page Style => No page style (you will choose Basic Page Style to return to the previous setting);

Practice:
a. Navigation without JavaScript – the search option no longer hides the “Nume” and “Email” texts on the newsletter box; also, I can’t add items to the shopping cart and I can’t zoom in on images; these problems are pretty bad; other than that, the navigation was fine;
b. Navigation without cookies – I can contact you, use the search and browse the web site; I can’t log into the web site and I can’t add any products to the shopping cart; quite bad for those who don’t have cookies activated – they can’t order anything on your web site;
c. Navigation with Flash deactivated – I can’t see the slider at the top; it’s quite an important part of your navigation;
d. Navigation without CSS – I can browse the web site without the CSS loading; it doesn’t look that good, but overall it’s fine;
e. About navigation using forms – although you do have a search box, a contact form and a login page for the WordPress blog, they’re all fine; I can navigate most of the web site content without forms;
f. About navigation by logging into the web site – I can’t send an order without filling out a form, but this is fine; you can still live without that content being accessible;
g. You don’t use frames;
h. You have no splash page;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
I’d look into the JavaScript and cookies problems; a visitor should be able to order without cookies or JavaScript activated;
The navigation without Flash can be improved slightly if you use static banners instead of Flash ones, yet it’s a minor issue;

A2.2. Presence of robots.txt and sitemap.xml
Theory:
I expect the robots.txt file: first of all, to exist and, second of all, to contain a link to at least one of the files:
i. sitemap.xml;
ii. sitemap.xml.gz;
iii. sitemap.gz,
so that the search engines know that you have an .xml sitemap;
You can read here about robots.txt files – robots.txt organization;
As a general rule, you should avoid using disallow without knowing very well what you are doing (examples: site security reasons – maybe you change some texts on a daily basis and you don’t want this to be visible in the Google cache; or you may want to keep the login and admin pages away from the eyes of the search engines);
On the other hand, used without precautions, robots.txt can create some problems;
A lot of people use Disallow for SEO reasons – Page Sculpting; I recommend reading these articles – nofollow problems, Page Sculpting is nonsense – before doing such things;
How to create a sitemap.xml file? You can find here – XML Sitemaps Generator – a sitemap generator for a regular web site, smaller than 500 pages; a best practice would be to run it from time to time, as you update your web site; also, here – Google Sitemap Generator – you will find a plugin for WordPress;
Depending on the platform of your web site, you can have an automated sitemap.xml generator, or you can develop one in-house (it’s not very hard); if your web site rarely updates its structure (new files, new pages), you can use the above generator to manually update the sitemap.xml;
If your sitemap.xml is very large, you can consider splitting it into several files;
Also, it’s best that the sitemap entries contain the priority of each file (thus, you can “inform” the search engine how important each page is) and the date of the last change;
Final note: don’t forget to add your sitemap.xml in Google Webmaster Tools;
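For illustration, a minimal robots.txt which blocks nothing and announces the sitemap could look like this (the sitemap URL is an assumption – adapt it to the file you actually create):

User-agent: *
Disallow:
Sitemap: http://www.camasi.com.ro/sitemap.xml

And a minimal sitemap.xml with the priority and last-change date mentioned above could look like this (the date and priority are illustrative values only):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>http://www.camasi.com.ro/</loc>
<lastmod>2010-01-15</lastmod>
<priority>1.0</priority>
</url>
</urlset>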

Practice:
Robots.txt – the file doesn’t exist, this is bad;
Link to sitemap.xml – no;
Disallow – you don’t have robots.txt, thus you don’t use disallow;
In the future, pay attention to how you use disallow, so as not to block any engine;
sitemap.xml, sitemap.xml.gz, sitemap.gz – none of these three files exists;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average;

Things to do next:
If I were you, I’d create a sitemap.xml file and link to it from robots.txt (which also has to be created);

A2.3. Navigation test (crawling)
Theory:
I will test the web site with Smart IT Consulting – Googlebot spoofer, with SEO browser and with JSunpack – JavaScript Unpacker; I will also do a test navigation using Lynx Viewer;
Also, you should note that if you follow the rules of the W3C (World Wide Web Consortium) you can be sure that the web site is navigated (crawled) well by the search engines; the W3C rules are standards that the creators of Internet browsers – Firefox, Opera, Safari, Internet Explorer etc. – try to follow; if you follow those standards, in theory both the browsers (which display your web site to the users) and the search engine spiders (a basic aim for SEO) will navigate the web site very well;
With the W3C tests you can see the current status of how well you follow the W3C guidelines; do remember that, although it’s optimal to have 0 errors, most of the time it’s better to have a look at the errors and solve only those problems that significantly affect navigation by the search engines; have you forgotten to put an ALT attribute on an image? That’s very bad for image indexing, but the web site can still be indexed; have you forgotten to close a tag, affecting the correct display of the web site? Perhaps you should correct this; remember, as a general rule, that permanently seeking 0 W3C errors on every page of the web site will help only a little, if we are talking about minor errors;
I will run the W3C test on the main page of the web site; in the future, if you suspect that a page is not indexed, it’s a good idea to check that page with the W3C validator (in other words, you can test any page of the web site, not only the homepage, and it’s best to run those tests when you have a real need for it);

PS: Beyond the external testing, you can run a test yourselves; just check the Crawl errors section in Google Webmaster Tools – you will see there the potential crawling errors; other search engines have functions similar to Google Webmaster Tools: Bing and Yahoo! Site Explorer;

I will also test the CSS and the links from the web site;

Practice:
Testing with Smart IT Consulting – Googlebot spoofer: no major problems; some parts of the web site are missing (as interpreted by the search engines), some external elements in the page (JavaScript and Flash, for example) don’t load, but the navigation is possible;
SEO browser test: I could browse the web site very well;
Below is some data worth retaining; it will be presented again in a different form later in the report:
Page loaded in: 0.644 seconds
IP address: 86.105.167.6
Country IP: Romania
Lynx Viewer test: no problem was found, everything runs smoothly;
JSunpack – JavaScript Unpacker test: there are no malicious or suspect elements;
W3C testing: 15 errors, 0 warnings; I would have a look at the errors and fix the important ones;
CSS test: 4 errors, 0 warnings; it’s pretty OK, but do have a look at the errors;
Link checking: things look fine;

I have personally checked your Google Webmaster Tools, Bing Webmaster Center and Yahoo! Site Explorer – no problems found generally; I do have to warn you about these GWT errors and these other GWT errors;

No crawl errors on Bing or Yahoo!;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): medium gravity problems;

Things to do next:
I’d solve the W3C HTML and CSS errors (only the ones which are really important);
It is very important to have no pages on the web site with crawling problems, duplicate meta titles and so on, so log into Google Webmaster Tools and fix the problems;


A3. Checking the web site in search engines

A3.1. The search [site:camasi.com.ro]
Theory:
a. Have most of the pages been indexed?
We compare the results from sitemap.xml (or we just count the pages in the web site, if sitemap.xml doesn’t list all the pages) with searches from the search engines:

We get a number of pages indexed in each search engine;
If you want to increase these numbers, you have to make sure you do two things:
i. You don’t “block” the search engines’ access to the web site; you can do this via a correctly configured robots.txt, by not relying on cookies, JavaScript or Flash for navigation, by following the W3C guidelines (all of these have been described earlier) and via the robots meta tag (described below); so, as a first solution – don’t block the robots’ access;
ii. You “invite” the engines to visit your web site; what attracts a search engine? What drives it and motivates it to visit link after link on your web site? What makes it come back and search for new texts? There are many criteria; mainly, Google must see your web site as a quality web site; how does it decide that? Mainly through links: if you have a lot of links (i.), from quality web sites that are not spamming or doing things on the border of legality (ii.), and those links come from web sites that have, in their turn, a lot of links, are trustworthy and are important on the Internet (iii.), the search engines will favor your web site and index it better (they will visit it, navigate as much of it as possible and come back often);
The two reasons above cover an important percentage of the reasons for which a web site is well indexed; if you want a full, exhaustive list of the factors for which a web site ranks higher (and, thus, may be considered “appropriate” to be indexed), you can see a very good article at Search Engine Ranking Factors 2009 – SEOmoz;
As a side note, it is possible that a search engine shows more pages than your web site actually has; yes, it may seem absurd, but Google can “see” more pages on your web site than actually exist; what can cause such behavior? Duplicate files (the www. and non-www. versions of the web site – analyzed below) or old pages which no longer exist in the structure of your web site, but haven’t been deleted from the server and, although they have no links from other pages of the web site, can still be accessed by the search engines; anyhow, prior to deleting an old page from the web site, first read this:
Wrong Page Ranking in the Results? 6 Common Causes & 5 Solutions;

b. Is the homepage first on the list of search results?
If the homepage is not the first in the list, there are two explanations:
a. One or more pages have attracted more links than the main page;
b. Google has penalized the first page, for various reasons;
Although having a page other than the homepage in the first position is not necessarily a penalty, it is preferable that the first result in the list be the homepage itself;

Practice:
a. Have most of the pages been indexed?
You don’t have a sitemap.xml file; excluding the blog and the results from the search pages, I estimate the number of pages on your web site at about 800;
You have no pages in the HTML sitemap, since you don’t have an HTML sitemap;

Search engine searches:

  • Google (533 indexed results);
  • Yahoo! (37 indexed results);
  • Bing (472 indexed results);

I can say that most of the pages of your web site have been indexed by Google and Bing; on the other hand, Yahoo! has indexed only a small proportion of the web site;

Overall, it is a satisfying result for Google and Bing, and less satisfying for Yahoo!;

b. Is the homepage first on the list of search results?
Yes, the web site www.camasi.com.ro is first in the list at all the engines;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): low gravity problems;

Things to do next:
See the solutions in the theoretical parts to increase Yahoo! indexing;

A3.2. Look for the brand name at the search engines
Theory:
a. Searching for the name of the company/brand
Searching for the name of the company/brand of a web site with a certain presence on the Internet should return the web site itself in the first position – thus, camasi.com.ro should be first;
If this doesn’t happen, there is a possibility that the search engine considers the web site to be spamming (or perhaps the web site hasn’t received enough links – atypical for a web site which has been on the Internet for some time);
I chose to favor the searches in Romanian, because most Internet users in Romania do such searches – localized;

What should be done to increase positioning? First of all, if you are involved in spammy practices (links from obscure sources, trying to fool the search engines in one way or another), you should stop these practices; concerning links, if you have enough of them and they are quality links, normally the first site in the searches should be camasi.com.ro; thus – get more links!

b. Searching for top keywords
Also relevant for positioning are the searches for keywords that are relevant for traffic; if the web site is well optimized internally (which means it has a good architecture) and externally (it has valuable links, from important web sites, from many web sites, from its activity niche, and with relevant anchor text), then it should be well positioned for the desired keywords; if the web site doesn’t have good positioning for some keywords, you can study the SEOmoz guide for rankings – SEOmoz Search Engine Ranking Factors – and implement as much of the advice there as possible; you should note that internal optimization (what you do on your own web site) accounts for about 20% of the positioning factors, while external optimization (links that others place toward your web site) accounts for about 80%; thus, in this case also, put the emphasis on getting more links;

Practice:
a. Searching for the name of the company/brand
The results for searching [camasi] at:

  • Google (position 1 out of 10 results, pages in Romanian are advantaged at this search);
  • Yahoo! (position 1 out of 10 results, pages in Romanian are advantaged at this search);
  • Bing (position 1 out of 10 results, pages in Romanian are advantaged at this search);

You do well with search engine positioning; it should be noted, though, that the Yahoo! search didn’t bring the homepage as the very first result;

b. Searching for top keywords
According to Google:
[camasi online] (1st place);
[camasi barbati] (1st place);
[camasa] (you don’t appear in the first 10 results);

(+important) You occupy top positions for searches related to your brand, and you even have good results for top keyword searches; I’d look into the “camasa” issue;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems;

Things to do next:
I would look at the solutions in the theoretical part to increase positioning for competitive keywords;

A3.3. Checking content
Theory:
a. Hidden content in the web site pages
Procedure: I visit the Google cache for each of the pages (to see the Google cache of a certain page, in this URL you should replace target_URL with your desired URL; I have done this for each of the 5 pages analyzed); then I select all the text in the page, to see if selecting the text reveals text written in the same color as the background; then, for each of the previous links, I choose the text-only version (this will not load the CSS file, making the page look very rudimentary), to compare the original page with the results saved on Google’s servers;

Situations: if the chosen pages don’t appear in Google cache, there are two hypotheses:
i. The pages have not been indexed by the search engine (either the page has appeared recently and Google hasn’t indexed it yet, or Google has had problems reaching the page via regular crawling, or – and this is a serious problem – Google penalizes you by not indexing your web site for some unorthodox practice; there is also the possibility that the page is considered less important by the search engines and for this reason has not been indexed);
ii. The pages have been blocked from indexing; details here – avoid search engine indexing – on how this could have happened (you should generally do the opposite – make sure Google indexes you);

b. Checking for duplicate content (the content on your web site should not be found on other web sites)
The situation is typical – a web site publishes an extraordinary article and benefits from the traffic, links and marketing buzz; the competition sees this and, with your permission or not, legally or not, copies the content; there is a problem, though: if an Internet user searches for a fragment of that extraordinary article, which site will be shown? Will it be the web site that started the buzz, or the web sites that copied the content? Google must decide this very well; ideally, you shouldn’t copy content from other web sites and others should not copy content from your web site;

There are a few possible situations:
i. Only you appear in the results – this is optimal;
ii. You appear in the results, among others (duplicate content), but you are the first in the results list – you can avoid duplicate content if, when you yourself publish content on other web sites, you publish different texts there; so, you can make one text for your web site and a different text for your partners; the problem is not that big if you appear first in the search results, but it isn’t optimal either – it’s a shame if, for your own content, other web sites rank first; being copied by others, though, is harder to avoid;
iii. You appear in the results, among others (duplicate content), but you are not the first in the results list – it is a problem if someone looks for a page from your web site and your web site ranks second in the results; you can apply the solution at point ii. (different texts on other web sites), or you can increase the value of your web site in the search results; how do you do that? By gathering more links to that specific page and to your web site in general; also, if someone copies your text, it’s best if you have a link in that text, showing that the text originated from your web site; you should also make your web site very crawlable, so that the first web site Google indexes is yours, not your competitors’ (this can be done with links to your web site and with a good crawling architecture); please remember that this is not about you copying content from others, or others copying content from you; it is about Google considering your page less relevant than others and showing it in a lower position; the satisfaction that you haven’t copied isn’t helpful in this situation – the traffic benefit goes to the first site shown in the results list;
iv. You don’t appear in the results, but others do – this is good news for your product (people can find out about it) and bad news for you – you are nowhere to be found; see the second solution at point iii.;
v. You don’t appear in the results, other web sites don’t appear either – this is problematic, you have an indexing problem;
Solution? If others have copied content from you, see this guide: The Illustrated Guide to Duplicate Content in the Search Engines; if you are the one with duplicate content issues (you have several pages with the same content on your web site, or you take content from other web sites), look here: SEOmoz on duplicate content;
Regarding indexing problems – check two things: your web site should not block indexing, and it should be attractive enough that it “invites” the search engines to visit it (see the solutions previously given at A3.1);

Practice:
I will analyze the following 5 pages on your web site: 1 – Informatii generale, 2 – BOTANY BAY, 3 – LUCIO GREY, 4 – PABLO, 5 – KIRK BLUE;

a. Hidden content in the web site pages
All five pages were indexed, so I could check each of them for hidden content; no hidden content was found, and you have no problem with lack of indexing;

b. Checking for duplicate content (the content on your web site should not be found on other web sites)
I have made 5 searches for duplicate content on Google, corresponding to the 5 pages above:
1: you don’t appear at all among the ten results; duplicate content problems everywhere here;
2: duplicate content with an article, but it’s pretty fine – you are in the first position, and the second result is not that relevant;
3: you occupy the first position, no duplicate content on the results page;
4: duplicate content on okazii.ro, you are in 1st position;
5: duplicate content on okazii.ro, you are in 2nd position;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems, issues with duplicate content;

Things to do next:
See the solutions in the theoretical part;
You should avoid being copied, and you shouldn’t post the same content on other web sites yourselves;


A4. Checking the hosting server and the domain

A4.1. Checking both the www. and the non-www. versions of the web site
Theory:
When entering a web site, let’s say Google, some people type www.google.com; other people, more used to the Internet, will not type www. in front of the domain and, for speed, will type only google.com; in both cases, the correct page will open; how is the situation in your case? I will test below the option without www. and the option with www.; when I visit these two pages, a redirect should kick in, so that, in practice, there is only one URL;
Also, I will test 14 other pages (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14), generated using various programming languages; normally, when I test all of these pages, I should reach either the homepage (via a redirect) or an error page;
What happens if there is no redirect and I don’t receive an error message? In this situation, there is a problem (a big one, I might add) with duplicate content! Solutions: use the canonical tag (if you wish to keep the current setup, without any redirect) or a redirect (so that you will only have one version of the pages);
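As a sketch, assuming the site runs on an Apache server with mod_rewrite enabled (an assumption – the web site uses PHP, but I haven’t verified the server type), the 301 redirect from the non-www. version to the www. version could be done from a .htaccess file like this:

RewriteEngine On
# send any request for camasi.com.ro to www.camasi.com.ro, keeping the path
RewriteCond %{HTTP_HOST} ^camasi\.com\.ro$ [NC]
RewriteRule ^(.*)$ http://www.camasi.com.ro/$1 [R=301,L]

Alternatively, if you keep both versions accessible, a canonical tag placed in the <head> of each page tells the engines which URL is the main one, for example:

<link rel="canonical" href="http://www.camasi.com.ro/produs.php?pid=223" />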

Practice:
(+important) The option without www. doesn’t redirect, as it should, to the option with www.; very big problem!

Out of the 14 pages tested (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14), I have found problems at pages 5 and 6; you should redirect any of these pages (including 5 and 6) to the homepage or show an error page; this is a big problem for duplicate content;

As a side note, the error page given on most of the pages is returning a very cryptic error mage, you should have a better 404 page, but I will come back to this later;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): very big problem;

Things to do next:
The most important thing is to make one of the two versions of the web site (with www. or without www.) the main one;
Also, you should solve the index.php problem;

A4.2. Testing a properly made 404 page
Theory:
I test what happens when I request a page that doesn’t exist on your web site; normally, the following should happen:
a. I should receive an error message informing me that the page doesn’t exist; the message should not be unclear (“404 error: the server could not load the page” – OK, what should I do now?); instead, it should be understood by everyone (“The page that you have requested doesn’t exist on the web site; we invite you to search for a different page”);
b. I should have a search box in the page; it is possible that the web site has a search box permanently visible, including on the 404 page; it is nicer, though, that on a 404 page you repeat that box right beneath the text which informs the visitor that the page doesn’t exist; so, if on the pages of the web site the search box is typically found on the right, you should also have it within the text itself;
c. (optional) It would be useful to also have a web site summary; if the page doesn’t exist and I have entered your web site, give me something to do; the most visited pages of the web site, or the most recently added pages, are good examples of pages that can be shown; if we are talking about a small web site (under 20 pages), you can show all of the pages of the web site;
d. If you have a very large web site, it is best that you have a link to the page “sitemap” (example: Apple’s sitemap);
e. (complicated to implement, but very useful) If you notice that the user has entered your web site from a link that looks like this:
www.website.com/page-about-napoleon-generals,
you can automatically do a search within your web site for [page about napoleon generals] and show the visitors the results for that query;
f. If your web site doesn’t automatically inform you each time a person lands on a 404 page, you can also include an option for the visitors to send a short message to the site administrator;
g. Another good option is to have a link to the main page;
h. You can inform your visitors that they may have typed a wrong URL, and invite them to check that it was properly typed;
i. Finally, weighing all my advice above, you should make a simple page; most of the time the visitor just wants to leave the page fast; don’t force them to read a lot of text, be brief;
I will give below some details on the code that should be returned by such a page;
See details on how a 404 page should look: Creating user-friendly 404 pages;
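As a sketch, under the same Apache assumption as at A4.1, a custom 404 page could be wired in via .htaccess (the file name 404.php is hypothetical):

ErrorDocument 404 /404.php

The page itself must keep returning the 404 code – in PHP that means sending the header before any output:

<?php header("HTTP/1.1 404 Not Found"); ?>
<p>The page that you have requested doesn't exist on the web site.</p>
<p><a href="http://www.camasi.com.ro/">Go to the homepage</a> or search for a different page.</p>

On top of this skeleton you would add the search box, the site summary and the other elements discussed above;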

Practice:
The 404 page shows a message:
“Not Found
The requested URL /pagethatdoesntexist was not found on this server.”;
I would say that the message is pretty unclear (“not found on this server” – now what?), you don’t give a link to the homepage, there is no search option, and the web site navigation is not present; all in all, a pretty badly made page;
The page does return a 404 header, which is fine;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problem;

Things to do next:
I would improve the 404 page, as suggested in the theory part;

A4.3. Checking the IP and all the domains hosting on the server
Theory:
a. Testing the IP address
Selfseo.com will tell me the IP (Internet Protocol) address of www.camasi.com.ro;
I can also check the IP with the ping command:
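For example, from a command prompt:

ping www.camasi.com.ro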

I will find out to which country the IP is allocated by using the RIPE.NET database;
It is preferable that a web site from a country has an IP allocated in that country;

b. Testing the domains hosted on a server
Yougetsignal.com will show me, generally with pretty precise reliability and with data also confirmed by Bing, how many domains are hosted on the same server as camasi.com.ro, and which those web sites are; it is preferable that this number be as small as possible (your web site should be hosted alongside only a few other web sites), for one reason only: if one of the hosted web sites starts doing unethical things (spamming, promoting illegal activities, operating in a “grey” to “black” zone of the Internet), the rest of the web sites hosted on the server, including camasi.com.ro, will automatically be flagged as web sites with possible problems; the more web sites hosted on a server, the more such flags there already are; this doesn’t automatically mean that your web site will have problems but, as a general rule, it is best to be hosted on a server where your “neighbors” are ethical, and a very good way to make sure this happens, if you don’t control the rest of the web sites, is to be hosted on a server which hosts only your web site or, at most, a few other web sites which you can trust;
As a side note, in the list of domains hosted on the server, your own web site should typically also be found;

Practice:
a. Testing the IP address
According to selfseo.com, your IP is 86.105.167.6; running the “ping” command, I get a confirmation of this;
RIPE.NET tells me that the attributed country is Romania;

I’ve also confirmed the IP address above (A2.3. Navigation test (crawling)), at the seo-browser.com test;

b. Testing the domains hosted on a server
According to yougetsignal.com, there are 228 domains hosted on the same web server;

Bing confirms this;

This is a very big number; things look bad here;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems – the number of domains hosted on the same server is large;

Things to do next:
Although the number of the domains hosted on the same server is large, if none of them are doing illegal things, you can have zero problems with that; in the future, I’d look for a hosting solution which offers me a dedicated IP address;

A4.4. Check the company’s history
Theory:
Internet Archive WayBackMachine (WBM) is a service which saves pages on the Internet in order to archive them; since the Internet is permanently changing, WBM revisits a web site, generally with a certain periodicity, and saves the changes; so, by using this service, Internet users can see how a web site looked in the past;
It is possible that the search engines use this service to find out details about the history of a web site; if a web site hasn’t been indexed by WBM, there can be several explanations:
a. Few links to the web site have made it appear unimportant to WBM;
b. A web site such as camasi.com.ro can choose not to be indexed by WBM, for various reasons (for example, you may not like that today’s Internet users can see a design of your web site which is a few years old; if you wish to stop the indexing, you can either block it, or eliminate the pages that have already been indexed);
c. A screenshot of the web site made today will be shown only after about 6 months, so a web site can be in the WBM index while the information is not yet public;
Therefore, absence from WBM is not a reason to worry – there can be multiple reasons for which a web site doesn’t appear in the index; on the other hand, presence in WBM somewhat guarantees that the web site has a past which can be considered good;
Also, if a web site exists in WBM, it is very good if its past profile is identical or very similar to its present-day profile (if we are talking about a web site which sold cars for 10 years and now offers audit services, that is a rather unpleasant situation);

Practice:
I have tested with Internet Archive WayBackMachine the two possible options for the main page: with www and without www;
You have no appearances in WBM; it is not great, but this isn’t a big problem either;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): small problems;

Things to do next:
Do not block the access to WBM, in time you will probably be indexed;

A5. The URLs

A5.1. One URL / page (avoid having a tracking URL / different parameters)
Theory:
In the analysis I will look at the page architecture; beyond the existence of a page with or without www. (for example, the pages:
www.site.com/random-page and
site.com/random-page
should redirect to the same page), I will also check the existence of parameters:
site.com/random-page?id=20&num=30&lang=ro&sort=down&page=4
this is a rather poor example of a page URL;
Ideally, a page should not contain parameters; the reason? Most of the time, pages with parameters create duplicate content problems (the same content, or very similar content, is shown for multiple distinct URLs);
Even if it contains parameters, a page should not have more than two:
www.site.com/cars.php?model=skoda&color=gray
is OK, but this URL:
www.site.com/cars.php?model=skoda&color=gray&doors=4&engine=diesel&price=descending&windows=power
has too many parameters;

Practice:
The option without www. doesn’t automatically redirect to the option with www.; this is rather unpleasant;

The products in the web site have URL parameters; the product pages and categories have one single parameter, this is good news:
www.camasi.com.ro/produs.php?pid=223
www.camasi.com.ro/categorie.php?scid=5

As a hint, even if this is only a small problem, it would be preferable not to have any parameters in the URLs;

Also, an optimized URL for both the products and the category would look like this:
www.camasi.com.ro/camasa-flying-dutchman-albastra/
www.camasi.com.ro/camasi-regular-fit-maneca-lunga/

For the product pages you can consider this also:
www.camasi.com.ro/camasi-barbati/camasa-flying-dutchman-albastra/ or even
www.camasi.com.ro/camasi-regular-fit/camasa-flying-dutchman-albastra/
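As a sketch of how such URLs could be served without rewriting the whole application, assuming Apache with mod_rewrite (see A4.1) and assuming you store a unique text “slug” for each product and category in your database (both are assumptions about your setup), the friendly URLs could be mapped internally to the existing scripts:

RewriteEngine On
# /camasi-regular-fit-maneca-lunga/ -> categorie.php?slug=camasi-regular-fit-maneca-lunga
RewriteRule ^([a-z0-9-]+)/$ categorie.php?slug=$1 [L]
# /camasi-barbati/camasa-flying-dutchman-albastra/ -> produs.php?slug=camasa-flying-dutchman-albastra
RewriteRule ^[a-z0-9-]+/([a-z0-9-]+)/$ produs.php?slug=$1 [L]

The old produs.php?pid=... and categorie.php?scid=... URLs should then send 301 redirects to their new equivalents, so that the engines keep only one version of each page;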

Regarding search, the pages with the search results look like this:
www.camasi.com.ro/cautare.php
So, I can’t send a search query to another person; if I want to send a friend a link to a search, I can’t do that on your web site (more a usability problem than an SEO problem);
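As a side note, this usually happens when the search form submits its data via POST; a GET-based form would put the search terms in the URL of cautare.php and make the results page linkable (the field name “q” below is a hypothetical choice – I haven’t seen your form’s code):

<form action="cautare.php" method="get">
<input type="text" name="q" />
<input type="submit" value="Cauta" />
</form>

A search for “camasa alba” would then produce a linkable URL such as www.camasi.com.ro/cautare.php?q=camasa+alba;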

Have a look at this video:
Whiteboard Friday – Faceted Navigation for details on site navigation;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems;

Things to do next:
I would focus on removing the URL parameters across the whole web site and making the URLs SEO friendly;

A5.2. The URLs should be short, descriptive (contain keywords), written in lowercase and use the appropriate separator
Theory:
A long URL looks like this:
www.site.com/articles/fish/raise/this-page-is-all-about-raising-fish-in-natural-environments-article204848.html

A shorter URL would look like this:
www.site.com/raising-fish-natural-environment

It can be noticed:
a. I have eliminated some folders, since they are not useful to the page (“/articles/fish/raise/”); in the current situation the folders are not useful – I can already determine from the URL that the page is about raising fish, so the folders only repeat information; if, instead, the folders were something like:
www.site.com/2001/08/raising-fish-natural-environment,
which would show me that the article was written in August 2001, this information would be useful to me, and I would keep those folders (it is a system frequently used on blogging platforms, especially WordPress); it is recommended that a URL doesn’t have more than 4 subfolders; therefore, this would be the maximum:
www.site.com/subfolder1/subfolder2/subfolder3/subfolder4/page.html
b. I have eliminated the words that are not searched for in the search engines – they are not keywords, they are just there due to the title of the page and bring no real value to the search (“this-page-is-all-about-”);
c. I have removed the prepositions and the words called “stopwords”, which, in general, are ignored by the engines because they are too common (“in”, “about”);
d. I have removed the article ID; if I had two articles on the web site with the same URL, I would add the suffix “-2” at the end of the URL of the article inserted second:
www.site.com/raising-fish-natural-environment-2
e. I have eliminated the final extension (“.html”; it can also be “.php”, “.aspx”, “.htm” etc.) – this is a very small-impact change; if all of the URLs of the web site end in .html, it makes little sense to change the whole web site architecture, but if you ever redo your web site, please remember this aspect – “.html” at the end of the URL doesn’t help and, by eliminating it, you gain a very small advantage;
None of the changes above are mandatory; on the other hand, by using them all, you will get a powerful effect of URL shortening;

Why should you aim for short URLs? Aren’t the long ones also good? It is all about:
a. Keyword density – if someone searches on Google for [raising fish], the search engine will give priority to the URL for which the ratio between what I searched for (“raising fish”) and the total number of words in the URL is as large as possible; a short URL:
www.site.com/raising-fish-natural-environment
has a keyword density of 2 (“raising fish”) out of 5 (“site raising fish natural environment”) = 40%; on the other hand, a long URL
www.site.com/articles/fish/raise/this-page-is-all-about-raising-fish-in-natural-environments-article204848.html
has a density of 2 (“raising fish”) out of 15 (“site articles fish raise this page is all about raising fish in natural environments article204848”) = 13.3%; in other words, in a search the advantage goes to the result with the higher keyword density;
b. Usability – a very long URL will probably be displayed in the search engine results in a truncated version; also, with a short URL the users can tell very fast what the page is about; give them, in the URL, the essence of the article;

As a side note, in many cases it is preferable to get the URLs right from the creation of the web site; once the web site is finished, the benefits often don’t justify changing the URLs of the entire web site;

Generally, it is preferable to have URLs written in lowercase; using uppercase letters in URLs is not a problem in itself, but it may lead to confusion – Internet users are used to typing URLs in lowercase;

Regarding the keyword separator, in a URL you can separate the keywords by:
a. Nothing:
www.site.com/raisingfishnaturalenvironment
(an option I don’t recommend; only in the case of a domain name such as www.mygreatnewwebsite.com should you avoid using a separator – in any other case, using one is recommended);
b. Space:
www.site.com/raising%20fish%20natural%20environment
(this URL will be shown in the Firefox address bar like this:
www.site.com/raising fish natural environment,
but when you copy the URL, it is shown as above, with %20 as a separator);
Using the space as a separator is not recommended; it is not an appropriate separator;
c. Underscore:
www.site.com/raising_fish_natural_environment
Let’s say this is a tolerated option; it is not that bad, but it’s not really recommended;
d. Hyphen:
www.site.com/raising-fish-natural-environment
This is the recommended option, excellent, top marks;

Practice:
The URLs of the web site are generally short, but you do have URLs which have been shortened by Google:
www.camasi.com.ro/photo-gallery.php?pid=210&index=1
www.camasi.com.ro/photo-gallery.php?pid=203&index=0
www.camasi.com.ro/photo-gallery.php?pid=223&index=2
www.camasi.com.ro/photo-gallery.php?pid=211&index=0
(these are photo galleries on your web site);

All of these URLs have been “cut” by Google;

How it can be improved:
a. Avoid the numeric parameters (“210”, “203”, “223”, “211”);
b. Use keywords instead of parameters;
c. Keep the URLs short;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems;

Things to do next:
I would rewrite the URLs according to the instructions;

A6. Web site titles

A6.1. To be unique (without titles that repeat themselves on many pages), descriptive (it should be clear what the page is about from the title), contain keywords (things that are searched for in the search engines), use brandname (contain the company name in the title) and, despite all of these, to be short (not longer than 65 characters)
Theory:
I personally think that the titles are the most important on-page element of a web site; so, it’s very good to have well-made titles, for maximum SEO results;

For testing I will use a Google search with 100 results on a page;

Regarding unique titles: the search engines prefer to show unique things in the results page; if the titles of a web site’s pages were all identical, we could, in theory, get results such as this one:

If you look at the image, you will notice that none of the titles has any personality – they all look the same; this is a problem; the search engines don’t want to deliver an unpleasant experience to the user, so they will do everything they can to show only results with a unique title; do you have many pages on your web site with identical titles? This is a problem – the search engine will probably remove part of those results; this is done, as said before, out of the wish to deliver a pleasant experience to the end user;

Regarding descriptive titles, it is optimal that from the title of the page I can tell what that page is about; in other words, a title should synthesize as well as possible the content of the page;

On the question of keywords, it is best for the title to use keywords that are relevant to the content of that page;

For the brandname – your company web site should contain right from the title the name of the company; as a good practice, it is preferable to have a title like:
Bosch washing machine with 2 compressors – Home appliances – OnlineStore.com,
rather than:
OnlineStore.com – Home appliances – Bosch washing machine with 2 compressors;

The principle is simple:
Keywords for the current page – Category (if applicable) – Brandname
In other words, you should put the unique part of the title (in this case, the name of the product) right at the beginning, and the part which tends to be repetitive (the category and, even more repetitive, the brandname) at the end; in exceptional cases, when the brandname is more relevant than the name of the page and is short, you can put it at the beginning:
Microsoft – IT solutions for companies;

Regarding the length, things are simple: a title should not be longer than 65 characters; titles longer than this will be displayed in a shortened version in the search engine results; as a solution to this problem, it is optimal to enforce short titles right from the content management system (CMS); in other words, each time you insert a page into the web site, it should automatically be checked that the title isn’t longer than 65 characters;

Note: a typical problem in setting titles – web site creators have a tendency to optimize a whole web site, not just a page, for a keyword; so, if I own a web site about vegetables and I’m interested in very good positioning for, let’s say, “potatoes”, a common tendency is to put the keyword “potatoes” in the titles of all the pages of the web site, no matter what the profile of the page is (so the page about cabbage will also contain, in the title, the word “potatoes”, although there is no connection between the two); this solution is inefficient; a single page should be optimized for the “potatoes” keyword (the homepage, for example, is usually advantaged in positioning, so you can optimize the homepage for the desired keyword) and the other pages should be optimized for different keywords; or you can make a dedicated page for the “potatoes” keyword and that’s it; you shouldn’t put the same keyword in all of the titles across the web site; it’s not useful, and Google may consider you have duplicate content; if you try to “sell” to Google a lot of pages with all sorts of vegetables, all optimized for the “potatoes” keyword, Google might consider that some pages just repeat, without any purpose, the content of the potatoes page; in other words, putting a keyword in a lot of pages doesn’t help the positioning of a certain page, and it does hurt all of the other pages on which the keyword is present;
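As a sketch only, assuming the pages are generated by PHP scripts such as produs.php and that the product and category names are already available in variables (I haven’t seen your code, so the variable names are hypothetical), a title following the “Product – Category: Brandname” principle could be assembled like this:

<?php
// hypothetical variables, loaded from your database
$product = "BLOOM BLUE";
$category = "SLIM FIT_ Maneca Lunga";
$brand = "Camasi.com.ro";
$title = $product . " - " . $category . ": " . $brand;
// keep titles under 65 characters: drop the category part if the title gets too long
if (strlen($title) > 65) {
    $title = $product . ": " . $brand;
}
?>
<title><?php echo htmlspecialchars($title); ?></title>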

Practice:
According to Google, you have several pages with the same title:

“www.camasi.com.ro” for all of these: 1, 2, 3, and the list could continue;

“Magazin online camasi barbati – product of Condra” for all of these: 1, 2, 3, 4;

“SLIM FIT_ Maneca Lunga – product of Condra” for all of these pages: 1, 2, 3 and the list could continue;

Solutions:
– Have only one version of the homepage;
– Put titles and ALT texts on the pictures of your web site (use a descriptive name of the photo for the ALT text, the filename and the title);
– For multi-page categories, use “page 2” etc. in the title as a means to differentiate between pages;

The titles are generally descriptive; I don’t like the titles of the photo pages:
“www.camasi.com.ro”;
A very poor choice;

The pages contain keywords, like “Camasa” or “Camasi”;

The brandname is there also:
“SLIM FIT_ Maneca Lunga – product of Condra”;
I have a bit of a problem with the title of the page, though; what’s with this “product of Condra”? In my opinion, it would be better if you named your pages like this:
Product – Category: Camasi.com.ro or even
Product – Category: Camasi;

That’s it; you only use “Condra” in some areas of the page, which makes it confusing; I’d use “Camasi” as the brandname, or put “Condra” more up front – right now it’s somewhat hidden;

Because you use poor naming for the photos, Google has even decided, in some cases, to automatically assign a title to those pages;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems;

Things to do next:
Get rid of duplicate titles, keep on having short titles, use a better brandname in the title;

A7. Content

A7.1. Is there enough text content on the web site?
Theory:
As a main principle, a page should have between 500 and 600 words as a minimum (this refers to the unique content of a page, thus excluding texts from the menus, header, footer, sidebar); that said, you can get good positioning in the search engines even with only a few sentences, and even very long articles (tens of pages) can rank well;
The main idea is different – be they long or not, it is optimal to have a lot of content on the web site: lots of pages, or pictures, or video clips; whether you choose to put 5 clips in a single article or create 5 distinct articles, one for each video clip, the principle is to have enough content on the web site; Google loves consistent content on a web site;
Attention! By this advice I don’t mean that you should visit Wikipedia and randomly copy pages with texts and images, just to get some content; the content should be unique to your web site, or at least it shouldn’t appear on your web site after it first originated on other web sites (in other words, you should not copy texts from elsewhere);
Other than this, the more content, the better; of course, the visitors should have a pleasant experience on your web site; in other words: write quality texts, related to the profile of your web site (if your web site is about vegetables, write quality texts about vegetables, not other things);
Notes:
i. If you don’t constantly update the pages of your web site by adding new things, it’s best to have a section dedicated to fresh content – a “blog” section, a news section, a press releases section or an articles section; you shouldn’t suddenly create 100 pages and then just abandon the web site; constantly add new content;
ii. You can create content yourselves, even automatically (for example, based on some technical and unclear descriptions of a product, you can generate an automated, more detailed description of that product), or you can ask others to write content (the visitors of your web site can leave comments, write reviews or testimonials);
iii. The new content doesn’t necessarily need to be on your web site; sustained activity on the social networks (Twitter, Facebook, LinkedIn) can also be a factor in your overall presence on the web; it is of smaller importance, though;

Practice:
You have plenty of unique content on the web site, mostly photos and descriptions of the products; you could use a blog which should also allow comments; you might also add comments on the main web site and a presence on the social networks;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
Add a blog, be active on social networks, and allow comments on both the blog and the web site;

A7.2. You should have proper <h1>-<h6> tags
Theory:
If you open a book and start reading, you have a pretty clear picture of which are the titles of the chapters and subchapters, based on font sizes;
A webpage is more complicated, though; you can open a web site and have an interesting picture on the left, a big text which invites you to click it on the right, plus a title and several subtitles, all in different colors; a visitor, being human, may still have problems identifying the title of the page;
How about the search engines? We are talking about robots, which can’t “see” what’s in a picture and therefore can’t tell what that picture stands for – a page title or a photograph; also, a very big text on the page, written in black, can be less important than a smaller text written in bright blue; which of the two is the page title?
Partly to solve this problem, the hn tags (from <h1> to <h6>; values beyond h6 are not valid) were invented;
With these tags, one can mark:
i. With h1, the title of the page – very important;
ii. With h2, a subtitle – less important;
iii. With h3, a subtitle even less important;
By using CSS, web programmers can color, size and arrange these titles in the page as they wish (so they are happy), while the engines know very clearly which is the title of the page and which are the subtitles, each with its own importance (so the engines are also happy); you can read here – h1-hn tags explained – additional information on using these tags;
As you might expect, the words marked as the title of the page using these tags are interpreted by the search engines as more important than the rest of the page;

In terms of SEO importance, the h1 tag is very important, h2 a bit less, and the tags from h3 down are mostly useful for usability purposes – the search engines give them minimal importance; some pages, such as very long ones, can use tags all the way down to <h6>; in a resource previously recommended (search engine ranking factors):
a. H1 tag: Keyword Use as the First Word(s) in the H1 Tag – 45% moderate importance;
b. H2-H6 tags: Keyword Use in other Headline Tags (H2–H6) – 35% low importance;
So, you should use at least the h1 tag for the title of each page of the web site; if some pages have subtitles and sub-subtitles, use h2-h6 for those cases;

A commonly seen practice (which I don’t recommend) is putting hn tags on:
a. The logo – this is pointless, an image can’t be the title of the page;
b. Elements which are repeated on every page of the web site – menus, sidebar elements; I don’t recommend these practices, they do very little to help position the current page;

Practice:
On the main web site, you use the h2 tag for the newsletter and new-products titles, the h3 tag for the social networks and other texts, and the h4 tag for the new-products listing:
<h2>Newsletter</h2>
<h2>Produse noi</h2>

<h3><img src="images/phone_logo.png" style="float:right"/>Comanda telefonic la 0728 926.408 intre orele 08:00 – 17:00 <br />de L-V</h3>

<h3>Pentru cei ce fac din fiecare zi o zi importanta, <br /> Pentru cei care acorda atentie fiecarui detaliu, <br /> CONDRA ii asigura ca nimic nu a fost uitat</h3>

<h4>BOTANY BAY</h4>
<h4>FLYING DUTCHMAN</h4>
<h4>GOOD HOPE</h4>

On a category page, h2 is for the title of the page and h4 for listing the items:
<h2>SLIM FIT_ Maneca Scurta</h2>

<h4><a href="produs.php?pid=192">JAMES PURPLE</a></h4>
<h4><a href="produs.php?pid=134">BONO WHITE</a></h4>

On a product page, you use h2 for the name of the product:
<h2>BLOOM BLUE</h2>

Regarding the h4 tag, you also use it on every page of the web site as fallback text for the non-loading .swf file (the slider at the top):
<h4>Content on this page requires a newer version of Adobe Flash Player.</h4>

Hints:
– Avoid using img inside an hn tag, like you do on the homepage with the phone logo;
– Put h1 on the title of the product, the name of the category and one very important text on the homepage (like “Camasi barbatesti”, for example) – see the sketch below;
– Don’t rely on h4 tags, they are hardly relevant for SEO; if you use them strictly for display purposes, that may be fine;
– Don’t put too many hn tags (you currently have a lot of h4 tags for the products on the homepage and category pages; avoid this);
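To make the hints concrete, here is a minimal sketch of how the headings of a product page might be structured (the subtitle texts below are hypothetical examples, not taken from your pages):

<!-- one h1 per page: the product name, the most important heading -->
<h1>BLOOM BLUE – camasa slim fit</h1>
<!-- h2 only for genuine subtitles of the content -->
<h2>Descriere</h2>
<h2>Produse similare</h2>
<!-- no hn tags on the logo, the menus or the long product listings; style those with CSS instead -->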

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
I’d restructure the hn tags on the pages according to the hints above;

(top)

A8. Meta tags (meta description, meta keyword)

A8.1. <meta name="keywords"> and keywords in the web site – problems with keyword stuffing?
Theory:
<meta name="keywords"> was a tag which was useful in the past; it was used to inform the search engine what the page is about; so, if you were doing a page on Alain DELON, it was somewhat useful to inform the search engine that the page is dedicated to French cinematography:
<meta name="keywords" content="alain delon, movie, movies, french, france, fan, fans, filming, actor, cinema"/>
With time, this tag got to be so abused that the search engines decided to ignore it completely; Google, for example, can tell by itself, with today’s technology, that a page on Alain DELON refers to French cinematography, and can correctly establish categories for a page; on the other hand, web page creators tend to catalog a page (also) according to some unrealistic criteria (for example, I can put “buy movie” in the list of keywords although I may not sell movies with Alain DELON on the web site, but I do display advertising about selling movies and I am really interested that my web site is visited by people who want to buy movies, so that I earn money from advertising; another method of abusing this tag was, in the first years of its use, the obsessive repetition of keywords, with slight variations: movie, movies, the movies, filming, films, shooting); at the end of the day, you should remember that the tag is not relevant anymore;
But the wave created initially (in the first years of the search engines) still persists; you can still find, today, web page creators who add the keywords tag in order to attract visits; while the engines don’t give you any boost for using such techniques, they can, instead, penalize you based on it – if you abuse the tag (by repeating keywords), Google may consider that you are trying to SPAM it; so, the tag doesn’t help if you create it, but if you use it in a bad way, you may get into problems; it’s best to eliminate it altogether;

While the meta keywords tag is hidden from typical visitors and visible just to the search engines, the obsessive repetition of some keywords, with no real sense for the user, just to get better positioning in the search engines, is, on the other hand, very visible to the visitors (alongside the search engines); this process is called “keyword stuffing” and it is a weak (and inefficient and unethical) method of promotion; my suggestion is not to repeat a keyword obsessively in a page just for positioning; of course, in a page about potatoes you will naturally speak about this vegetable, but, as a main idea, don’t push things;

Practice:
You have the meta keywords tag implemented on the web site, and it is just this:
<meta name="keywords" content="camasi, camasi barbati, camasi slim fit, camasi cambrate, camasi sport, camasi club, camasi clasice, camasi birou, camasi la moda, camasi online, camasi elegante, magazin camasi" />
Yes, you read that correctly: you repeat the above tag on each page of the web site;

In my opinion, it would be best to remove the tag; you are very close to the thin line which separates spam from legit data – you repeat “camasi” way too much and you don’t have a unique keywords tag for each page (not that you should create unique ones, but identical tags everywhere are a stronger signal that you are spamming);

Regarding keyword stuffing within the content itself, I haven’t noticed you doing such things;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
Remove the meta keywords tag;
Keep doing the good work (avoid the keyword stuffing in the content area);

A8.2. <meta name="description"> – it should exist, invite the viewer to click and have a proper length
Theory:
First of all, the meta description tag should exist; it would be very useful to have unique descriptions on all of the pages of the web site;
Why is it important to have a web site description? Not only does Google take that description and show it in some searches as a snippet (the small texts shown in the search results, under the title of the page – details here: Google snippets), Facebook also takes that description and uses it when someone sends a random page from your web site (the link sent via Facebook has a description, which is exactly the description from the meta tag); it is thus useful to have a description for Google, and also for the situation when someone recommends your page via Facebook, to give just two examples;

How can you make a description with good usability, so that the people are invited to click on the Google or Facebook result?
i. Directly – invite the people to visit your web site: “Visit our web site” / “Click here” / “We invite you on our web site” / “See our web page”;
ii. By presenting benefits, giving a motivation to enter the web site – “Our web site offers you X and Y and Z”, “The advantage of using the web site is A, B, C”, “If you enter the web site, you can register to …”, “The visitors of the web site can benefit from …”;
The description should be made thinking about the people who search for a term on Google, find your web site, and see two rows of text; write the text with those people in mind, and things will be fine; in other words, avoid writing:
“Our web site sells potatoes; the potatoes we sell are cheap; we have other vegetables, but we sell mainly potatoes; buy potatoes”,
or, even worse:
“potatoes, small potato, potatoes, potato food, packed potatoes, visit our web site for the latest potatoes”,
with the thought of obsessively repeating a keyword in order to get better positioning in the search engines; no important search engine (attention: none of them!) uses the description for positioning in search results; you should write the descriptions for people and ignore the web spiders; this doesn’t mean you shouldn’t use keywords – on the contrary, you can use them and even repeat them (but don’t overdo it); still, you should write the page description thinking about the visitors; a good example:
“Online potatoes: web site about fresh vegetables in Romania. In order to choose from the biggest offer of online potatoes, visit our web site.”

Finally, I advise you to keep the description under 155 characters (spaces included), to fit in the space allocated by the search engines (otherwise, it will be cut off with an ellipsis “…”);

As a general idea, the description for the main page is more important, because it is most often visible in searches;

Practice:
You have the meta description tag implemented in the web site like this:
<meta name="description" content="Condra va ofera cele mai cool camasi barbati online fabricate in Romania" />
(+important) Sadly, yes: all the pages have the same description, and even the homepage doesn’t have a unique one; it’s also a bit too short;

If I were you, I’d either compose a meta description for each page of the web site, or, considering that you are an online store, generate it automatically from a template such as (see the sketch below):
[name of the product] – [name of the category]: [small description for all of the products in that specific category] [invitation to enter the web site];

Example:
Good hope shirt – men regular fit shirts: for the latest shirts with short sleeves, visit our web site; your top choice for a shirt bought in Romania;
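In HTML, the generated tag for such a page might look like this (a sketch only – the wording simply instantiates the template above):

<meta name="description" content="Good Hope shirt – men regular fit shirts: for the latest shirts with short sleeves, visit our web site; your top choice for a shirt bought in Romania." />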

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems;

Things to do next:
I would write a unique description for each page of the web site, or at least generate them automatically; this is an important issue;
Failing that, at the very least remove the duplicated description from the web site completely (leave it only on the homepage);
Also, make the description longer, while staying under 155 characters with spaces;

A8.3. <meta name="robots"> – it shouldn’t exist, or, if it does, it should be implemented well
Theory:
In the text above, “robots” can be replaced by a specific robot (the most important ones, corresponding to the different search engines, are: googlebot, googlebot-image, msnbot, msnbot-media, msnbot-products, mozilla, ia_archiver, ia_archiver-web.archive.org, yahoo-blogs, yahoo-mmaudvid, yahooseeker), based on the template:
<meta name="googlebot">
A different argument follows: noindex, nofollow, noarchive, noodp, noydir, nosnippet;
The principle is: if any of the above sounds very complicated, you should either avoid these tags completely (you can do just fine without them), or, if you do use them, inform yourselves further on Search Engine Land;
It is advisable to have tags referring to robots only for the pages to which you don’t wish to give access to search engines (admin pages could be examples); bad usage of the tags can lead to problems (lack of indexing, complete absence from the search engines, etc.);
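As an illustration only – if, hypothetically, you had an admin page that you wanted to keep out of the search engines, the tag in that page’s <head> could look like this:

<!-- hypothetical admin page: don’t index it, don’t follow its links -->
<meta name="robots" content="noindex, nofollow" />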

Practice:
You do use the meta robots tag on the web site, and you allow all engines access; this is good;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
I would try, in the future also, not to block access to the robots;
(top)

A9. Redirects and localization

A9.1. How do the HTTP headers look for a typical page? How about for a 404 error page?
Theory:
Using the FireFox add-on Live HTTP headers, I can see what code the web site sends to the browser;
Most of the time, the 304 code is returned for files which haven’t been modified since the last access, and the 200 code for standard responses; for codes from the classes 1xx, 2xx, 3xx, everything is fine;
Up until now, no problem; on the other hand, if codes from the 4xx class (404 most of the time) or the 5xx class are returned, there can be problems – it means there are elements on the page that don’t load;
Also, if I access an inexistent page on the web site, I should obtain a 404 code, which confirms that the page doesn’t exist;
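For illustration, the relevant lines of such a capture might look like this (the paths are examples, not an actual recording):

GET /produs.php?pid=192 HTTP/1.1
HTTP/1.1 200 OK (existing page – fine)

GET /this-page-does-not-exist.php HTTP/1.1
HTTP/1.1 404 Not Found (inexistent page – the correct answer)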

Practice:
Typical pages return the appropriate codes, the 404 page returns a 404 code;
I have presented above the problems with 404 pages which don’t display properly;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
In the future, you should keep doing what you do now to have proper rankings in the search engines;

A9.2. The country of your IP and what top-level domain does the web site have
Theory:
If the server that hosts your web site has an IP attributed to Romania, there is an increased chance that the web site is in Romanian;
The problem of locating a web site (Google needs to know which country a web site is from) is minor for a web site with a lot of content; the search engines can automatically detect the language of a web site and locate it; this criterion would matter, for example, for a web site in English from Great Britain – in that case it is relevant to know whether the web site is addressed to a Great Britain, Australia or United States target market, for example;
Even for the situation in which a web site doesn’t have a lot of indexable text (it is a photo web site, or it is a web site almost fully in Flash), there is a satisfying solution: using Google Webmasters Tools and their equivalent from Bing and Yahoo! to attribute a country to a web site;
As a conclusion, an IP from a certain country does help, but it is a minor element and can easily be substituted by alternative solutions;

Regarding the top-level domain (TLD) used (the last letters from the domain, like “.com”/”.eu”/”.cn”/etc.), it is preferable that a web site from a specific country has the extension of its country; exceptions are the web sites from the USA, which tend to use .com TLDs and web sites from the European Union, which can either use their national TLDs or .eu (for the whole Union) domain;

Practice:
Your IP is from Romania (see above);
camasi.com.ro – the TLD is .ro (Romania);

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
You should maintain the current situation – avoid putting the web site on an external domain and IP from a different country;

A9.3. The presence of a physical address on the contact page (it would be preferable for it to also be present in the footer)
Theory:
A good indicator of the country a web site is attributed to is the physical address presented on the contact page, as well as the presence of the address in the footer of the web site; in the address it is best to specify the town, street, postal code, perhaps even the country;
The search engines will more easily attribute a web site to a specific country if they meet these two elements;
The presence of a physical address is also a usability criterion (users trust a web site to which they can relate not only virtually – “a web site” – but also physically – “a physical address”);
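A minimal sketch of such a footer block, using the dedicated HTML <address> element (the street details below are invented placeholders – replace them with your real data):

<address>
Condra – Str. Exemplu nr. 1, Bucuresti, 012345, Romania – telefon: 0728 926.408
</address>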

Practice:
You have the address on the contact page, and you also specify the country;

You don’t have the address in the footer;

The contact page on the blog is really sparse;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): small problems;

Things to do next:
I would put the address in the footer and fill in the other details for the contact page on the blog;

A9.4. Register to Google Local
Theory:
I will test the registration with Google Local via a search for the brand name on Google Maps;
Ideally, you are already present in the result list; if not, you should register with Google Local;
Why register? You can obtain good positioning for some local searches; coming back to the example with “potatoes”:
cheap potatoes kolkata india
will show results which contain the phrase “cheap potatoes” in the area of Kolkata, in India;
How to register? You can do this by using Google Maps (create a KML file and add it to Google Webmasters Tools – see the sketch below – or, alternatively, add your address directly on Google Maps);
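For reference, a KML file is just a small XML document; a minimal sketch might look like this (the coordinates and the description are invented placeholders, not your real data):

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Condra – camasi.com.ro</name>
    <description>Magazin online de camasi barbatesti</description>
    <Point>
      <!-- longitude,latitude – placeholders only -->
      <coordinates>26.10,44.43</coordinates>
    </Point>
  </Placemark>
</kml>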

Practice:
You show up first in the results;

I see you also have an ad displayed;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
Great job, nothing to do here;
(top)

A10. Links from you to yourselves (internal links)

A10.1. The number of links, be they to your own pages or to other web sites, should not be very big
Theory:
In the opinion of Matt CUTTS, manager of the webspam team at Google, it is best that any page of a web site has fewer than 100 links to other pages (internal, to the same web site, or external, to other web sites); it is not mandatory, by any means, to have fewer than 100 links; the motivation is simple – a link will carry more value if there are only 9 other links on the page, rather than 999 other links; it is preferable to have a smaller number of links on the page, to pass good value to other pages;
How to check yourselves how many links are there in a page? Install Opera browser, open a page from your web site in Opera, let it load fully, Tools => Links, copy all of the links in Microsoft Excel and see how many links you have;
On the other hand, I would much rather recommend using one of the two sites below:

As a side note, it’s best to underline the links and write them in a color different from the rest of the text on the web site; a good practice is using blue colored links;

Practice:
Analyzed pages:
The main page: 29 links;
SLIM FIT_ Maneca Scurta: 29 links;
SLIM FIT_ Maneca Lunga: 31 links;
ZORBING: 23 links;
PANDORA: 23 links;

Conclusion: you don’t have too many links on any of the analyzed web pages;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
You should keep having fewer than 120-130 links per page on the web site;

A10.2. The anchor text should correspond with the destination page – internal links
Theory:
On the Internet you have probably seen, many times, links with the anchor text “click here”; well, such links are not recommended; it is preferable to write: See the page “How to learn better in 5 steps and 3 hours of sleep”, rather than “Click here for details”;
How to check yourselves the anchor texts in a page? Install Opera browser, open a page on your web site in Opera, let it load completely, Tools => Links, and in the left column you will see the anchor text.
Also see a search such as:
[“click here” site:camasi.com.ro]
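To make it concrete, here is the same internal link written both ways, using one of your own product URLs (the descriptive wording is a hypothetical example):

<!-- weak anchor text -->
<a href="produs.php?pid=192">click here</a>
<!-- descriptive anchor text -->
<a href="produs.php?pid=192">JAMES PURPLE – camasa slim fit cu maneca scurta</a>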

Practice:
In general, you use a correct anchor text for internal links;

I’ve seen a “find out more” link on the homepage, pointing to the About us page, but that is about it;

A search for “click here” site:camasi.com.ro provided zero relevant results;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): small problems;

Things to do next:
I’d avoid anchor texts like “find out more” in the texts on both the web site and the blog;
(top)

A11. Links from you to others (to exterior)

A11.1. Checking nofollow links
Theory:
Regular links send value; if I create web site A and place a link to another web site – B – the B web site will benefit from my link;

On the other hand, if I create web site A, have a blog on it, and have commentators who wish to promote their web sites via SPAM by adding links in their blog comments, perhaps I don’t wish those links to have value for the search engines; in other words, I can leave the links on the web site, they can stay clickable, but I want to inform Google – “I haven’t personally checked those links, ignore them, they can be low quality links”; for this purpose nofollow was invented, and for this purpose it can still be useful today;

Nofollow was also used for a different purpose – to manage the links on your own web site; in other words, I can let Google know – “the product pages are very important, but, on the other hand, the pages with terms and conditions are very weak, you should ignore them” – and for this I can use nofollow on the pages with terms and conditions, for example; this technique is called “Page Rank sculpting”;

All in all, nofollow is a relatively advanced technique and, used wrongly, it can do more harm than good (Google can’t crawl some pages, the web site is only partially indexed, and I, as a site owner, have the – wrong, one might add – impression that I know better than Google which pages are more important and which are not); to avoid the problems of incorrect usage of nofollow, Google took some measures which, in a few words, advise you:
a. To keep using nofollow for blog comments, that is, comments for which you don’t constantly check what the web sites of the people commenting on your blog look like (do they use ethical principles of promotion, or are they in a somewhat grey area?);
b. To avoid using nofollow on your own web site as a page sculpting technique;

Right now, the nofollow technique, if still used according to the old Page Rank sculpting recipes, can be a dangerous method of influencing rankings;

You can still use nofollow for web sites which you don’t manually check (the web sites of those commenting on the blog of the company, for example);
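For reference, a nofollow link is an ordinary link with the rel attribute set; a link left in a hypothetical blog comment would look like this:

<!-- clickable for humans, but flagged for the engines as unvetted -->
<a href="http://www.example.com" rel="nofollow">the commenter’s web site</a>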

In this analysis I will also check the links, using Xenu’s Link Sleuth; the general report will show:

  • Broken links, ordered by link;
  • Broken links, ordered by page;
  • List of redirected URLs;
  • List of valid URLs you can submit to a search engine;
  • Site Map of HTML pages with a Title;
  • Broken page-local links;
  • Orphan files;
  • Statistics for managers;

Things to fix: the “broken” and “orphan” types of links; also, you should have a look at the URLs which redirect from one URL to another – problems may show up in this area as well;

Practice:
You have these external links on the homepage of your web site:
Recommendation service for each product – www.addthis.com/bookmark.php?v=250&pub=camasicomro
External Flash player – www.adobe.com/go/getflashplayer
Social networks accounts – www.facebook.com/camasicomro www.twitter.com/camasicomro
Online shopping services related to your business:
www.goshopping.com/ro/cat–6303–pret–Imbracaminte_pentru_barbati–sortby–tip–wordby–Camasi.html
www.shopmania.ro/
www.smartbuy.ro/

I haven’t seen nofollow links on your web site, so you don’t do Page Rank sculpting;

Xenu’s Link Sleuth report shows, in summary:

Correct internal URLs, by MIME type:

MIME type | count | % count | Σ size | % size | min size | max size | Ø size | Ø time
text/html | 96 URLs | 20.47% | 1266819 Bytes (1237 KB) | 2.10% | 725 Bytes | 21415 Bytes | 13196 Bytes (12 KB) | 3.563
text/css | 3 URLs | 0.64% | 17590 Bytes (17 KB) | 0.03% | 4448 Bytes | 7566 Bytes | 5863 Bytes (5 KB) |
application/x-javascript | 9 URLs | 1.92% | 257640 Bytes (251 KB) | 0.43% | 579 Bytes | 102871 Bytes | 28626 Bytes (27 KB) |
application/x-shockwave-flash | 7 URLs | 1.49% | 215110 Bytes (210 KB) | 0.36% | 30730 Bytes | 30730 Bytes | 30730 Bytes (30 KB) |
image/jpeg | 305 URLs | 65.03% | 55879117 Bytes (54569 KB) | 92.68% | 553 Bytes | 890016 Bytes | 183210 Bytes (178 KB) |
image/png | 4 URLs | 0.85% | 44626 Bytes (43 KB) | 0.07% | 166 Bytes | 18546 Bytes | 11156 Bytes (10 KB) |
image/gif | 39 URLs | 8.32% | 19746 Bytes (19 KB) | 0.03% | 49 Bytes | 3109 Bytes | 506 Bytes (0 KB) |
application/pdf | 6 URLs | 1.28% | 2593220 Bytes (2532 KB) | 4.30% | 38558 Bytes | 1021215 Bytes | 432203 Bytes (422 KB) |
Total | 469 URLs | 100.00% | 60293868 Bytes (58880 KB) | 100.00% | | | |

All pages, by result type:

ok | 574 URLs | 99.31%
mail host ok | 1 URL | 0.17%
not found | 2 URLs | 0.35%
skip type | 1 URL | 0.17%
Total | 578 URLs | 100.00%

You should check out the detailed report;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
Everything is fine, in the future you should avoid doing Page Sculpting;

Note for the future: having comments on the blog with nofollow is a good thing;

A11.2. The anchor text should correspond with the destination page – outside links
Theory:
See above the theory for internal links;

Practice:
You use the anchor text correctly for external links; I am satisfied with the results on this aspect;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
Things are fine;

A11.3. External links should have a connection with your web site
Theory:
If you have a web site on vegetables and potatoes, it makes sense to have links to web sites on the food industry, agriculture, transportation of fruits and vegetables;
If, on the other hand, most of your links to the outside lead to web sites about PC maintenance, this means that you have a problem with the profile of the links;

Ideally, the external links should lead to web sites that are connected, if not with the web site as a whole, at least with the subject on the page the link goes from;

I hope it is not necessary to say that there are web sites that deserve more trust (a site of a hospital) than others (web sites that sell Viagra);
So, you should pay attention to whom you give a link;

Practice:
While some of the links you have are rather general (AddThis, Flash player, social networking accounts), some of the external links do have a connection with your business (GoShopping, ShopMania, SmartBuy);

In the future, I’d focus more on finding relevant businesses in your field to link to;

If you start a blog, it will help to find good resources to link to;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): low gravity problems;

Things to do next:
In the future, you should put more links on your web site, which are related to your field of activity;
(top)

A12. Links from others to yourself (links from the exterior)

A12.1 Links should be numerous, come from quality web sites related to you, not point exclusively to the homepage, have a good anchor text and form a natural link profile
I hope it is clear to you that having many links helps make your link profile look natural, helps your Page Rank and helps show that you are present on the web; and this refers only to the number of links – other things count as well;

What does it mean to get links from quality web sites?
i. The links should be related to your web site (see above – if you have a web site about potatoes, you should obtain links either from fruit and vegetable web sites, or at least from a single article, on some web site, that is about vegetables);
ii. The web sites that link to you should have, in turn, a lot of links;
iii. (vicious circle) The links you obtain should be from web sites that are, in their turn, quality web sites;
Therefore, if you get an article in a national newspaper, and that article has a link to your web site, you have just obtained a good quality link (the national newspaper will write the article on a subject that is related to you, the online version of the newspaper has a lot of links from other web sites and that web site generally has quality links);
If, on the other hand, the news article is copied illegally by 100 obscure news sources, which don’t do anything but copy (scrape) content from the Internet, you just got 100 pretty weak links;
In other words, it is important that a source like “The New York Times” gives you a link, but much less important that a web site created just to scrape content from other web sites does;
Of course, the links to your web site should be on pages that are related to your web site (I have detailed this above);
Another important thing is that the links should not all point to the homepage; typical visitors enter the web site, read you, open an article, read, open another page; at some point they find a page/an article/a section which they like; they will pass it forward, perhaps on social networks; but they will send that specific page, not just any page of the web site, and, most of the time, not the homepage;
But site owners, when they promote a web site, tend to use the main page: “Visit the web site camasi.com.ro today …, and find out …, and receive …”; the problem is that a link profile in which 90% of the links to your web site point exclusively to the main page looks unnatural; you want positioning in Google and the other search engines not only for the homepage, but for the other pages also;

Regarding the anchor text – if you have a web site on cucumbers:
www.mywebsite.com/vegetables/cucumbers,
when a client of yours offers to give you a link, advise them not to phrase the link as:
“To find excellent vegetables, click here”,
but rather as:
“To find excellent vegetables, see Cucumbers – web site with vegetables of X Company”;
The second version of the link is useful for both the user – who will better understand what the destination web site is about – and Google, who can better interpret the link;

What does a natural link profile look like?
a. You should have links from diverse sources – the local City Hall (which displays the list of companies in the town), clients’ web sites (which describe their experience with you), social networks (someone recommends a link on Twitter, another person uses Facebook, someone films you and puts your company on YouTube); I also include here the fact that the links should come from as many web sites as possible (so if you have a link in the footer of site X and that web site has 1,000 pages, OK, you have 1,000 links, but those links are not from diverse sources; those 1,000 links would be of much better use if they were distributed over 100 distinct web sites, 10 links per web site);
b. You should have links that are not exclusively dofollow (Wikipedia has nofollow, Twitter has nofollow, the list continues); if you have only dofollow links, you may leave the impression that you have influenced the process by which you got the links (it looks like an artificial process, and the search engines don’t appreciate this);
c. You should have links with different anchor text – if all the links to your web site look like this:
Cucumbers – enter here to buy cheap and great cucumbers, from Thailand,
may be a sign that you have manipulated the process of obtaining links; it’s best that some links to your web site differ, you shouldn’t have all of the links to your web site with the same anchor text;
d. You shouldn’t suddenly obtain lots and lots of links and then suddenly stop – if you add your web site to 1,000 web directories within 3 days, distribute very quickly 10 press releases in English on dozens of press release web sites and then suddenly stop, this may be interpreted as a sign that you have used not-so-good methods of link building; my recommendation would be to obtain links step by step and constantly; also, you shouldn’t obtain more than 10-20 links in a given day, and if you do get 20 links per day, you should try to keep that pace for a long time; there are exceptions to this – should you write a phenomenal, revolutionary article which gets recommended a lot, you appear on TV, you’re all over the media, be it online or offline, then you will get a lot of links fast; but those are quality links and they generate, in turn, lots of links; yes – this kind of practice is good;
e. You shouldn’t have most of your links from obscure sources – if you submit your web site to web directories, choose them with care; if you send an article to be published on press release web sites, check the site on which you publish; always ask yourselves – if I were a star, would I associate my brand with this web site? Would I like my name to appear on it? Take good care of your brand; avoid links that can be obtained by anyone, even by those who spam the Internet;
f. It is useful not to take part in link exchange schemes, at least not excessively; you can do such things, but on a limited basis;

How can I see the links to you?

How can you see many more links?
Explanations for the number of links displayed in Google: that indicator is not necessarily realistic; to get a better estimation (still an estimation) of the number of links, I suggest looking at Google Webmasters Tools for details; Google Webmasters Tools only displays approximately 70-80% of the links, but it is a much more realistic indicator than the searches done above;
You also have the options offered by Bing and Yahoo! for webmasters; you can get access to many more links there; for example, a regular Yahoo! search displays only 1,000 links as a standard, also counts the links with nofollow (which aren’t relevant for rankings) and counts links from the same web site repeatedly; Yahoo! Site Explorer, on the other hand, will display the complete list of links;

Practice:
Tests (I advise you to use the option to download files as CSV/TSV to open them in Microsoft Excel):

I’ve looked in Google Webmasters Tools and found 364 links to your web site; 170 of them point to the homepage;

You also have links to these pages:
/docs/instructiuni-de-intretinere.pdf 21
/produs.php?pid=119 14
/servicii.php 13
/produs.php?pid=203 13

Looking at the links to your web site, I can say that:
a. You have very few links; this is the most important problem;
b. You also lack quality links; you have only a few links from quality web sites (Lumea SEO PPC, Styleshopper.ro, Condra, CityMaps.ro);
c. Not all of the links are from pages which are related to you (wonder.ro, for example);
d. You have a lot of links to the homepage;
e. The anchor text doesn’t have a lot of variations, but it is at a fine level;
f. The link profile looks semi-natural, you lack links from the community;

Regarding the anchor text, you have links to the web site with the following variations:
camasi barbati product of condra
costin iatan
(img)
[No Anchor Text]
http://www.camasi.com.ro
www.camasi.com.ro
camasi
aici

(+important) As a problem: you have relatively few links, and not of great quality; on the plus side, some of the links that you do have are related to your business;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): big problems;

Things to do next:
I would improve the link profile by increasing the number of links, their quality and their relevance, and by varying the anchor text;

A12.2 Links from Twitter social network to your web site
Theory:
Why is it important to have links to your web site from Twitter? Twitter is, in most cases, a social network on which regular people post links that they find interesting; a sustained presence on such a network can be a sign that you have a natural link profile, that people consider your links useful and give them willingly; last, but not least, a Twitter presence can be useful for purposes other than strictly SEO: you can interact with your target audience and you can get visits from the participants in Twitter discussions;
More recently, the presence of a web site in Twitter can positively affect the positioning in Google and Bing search engines;

I will test using BackTweets, Social Mention, Twitrratr and Twitter itself.
Obviously, you should have as many links from Twitter as possible;

Practice:
BackTweets: I have found 3 tweets linking to your web site; Social Mention shows 42 social mentions of you, the most important being the Flickr account; Twitrratr shows no mentions, and Twitter search shows 0 tweets; this is a pretty poor performance, especially considering that the only three tweets are links from your own Twitter account;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
I would try to raise the community support for the web site; you can put buttons like “Tweet this!”, if you consider that your audience is using Twitter; or you can simply test – put such a button and watch the results (see the sketch below);
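A minimal sketch of such a button is just a link to Twitter’s share endpoint (this assumes the standard twitter.com/share URL – check Twitter’s current documentation before implementing; the product URL below is only an example):

<!-- hypothetical “Tweet this!” link for a product page -->
<a href="http://twitter.com/share?url=http%3A%2F%2Fwww.camasi.com.ro%2Fprodus.php%3Fpid%3D192&text=JAMES%20PURPLE">Tweet this!</a>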

You could also maintain a more active presence on other social networks, to increase your visibility on the Social Mention web site;
(top)

A13. Analysis of the traffic and of the searches

A13.1. Alexa.com + public web analytics data
Theory:
If you use public traffic analysis software on the web site, I will analyze the traffic reported by it;

If your traffic is private, I can base my analysis on the extrapolations made by Alexa.com;
It is important to know how Alexa gets the presented data: from all the users who have installed the Alexa toolbar, it gathers information on the web sites they visit; from the statistics of those users, (approximate) data is extrapolated for all the users of a specific country (or worldwide);

The problem is, of course, that only a small percentage of Internet users use the Alexa toolbar, and those who do use it (it is useful for getting statistics on the visited web sites, for example) tend to be persons with advanced IT knowledge (therefore, their profile is not necessarily the typical profile of an Internet user);

Nevertheless, it is a useful approximation, at least to get a general idea of the traffic data;

Practice:
According to Alexa.com, the web site camasi.com.ro:
– It is the 795,041st most visited web site in the world;
– It is the 5,098th most visited web site in Romania;

These are relatively weak results;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
Try to increase the traffic to your web site;

A13.2. Being present in Google Trends
Theory:
Google Trends can provide information on the typical profile of the web site visitors; it is only available for web sites with a high volume of searches and impressions in Google (usually, large and very large web sites);

Practice:
Google Trends doesn’t offer data about the web site;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): zero problems;

Things to do next:
You can’t do too much about this; you could only obtain more searches for the web site, and that is only marginally within your control;
(top)

A14. Semantic HTML

A14.1. Pages that load slowly? Testing the page sizes + the transfer speed
Theory:
No matter the speed of the Internet access (computer/laptop) in your target markets, you should still focus on improving the speed of your web site; take into account that some users access a web site on mobile devices, or share the same Internet connection across many computers; another thing to note is that a file 50% smaller will reduce the loading time from a (theoretical) 4 seconds to a (theoretical) 2 seconds; this means that even if the transfer is very fast, a page with a small size will load even faster; therefore, if you can lower the page size – why not do it?

I consider this quote by Google to be interesting:

The average web page takes up 320 KB on the wire (Google took into account the embedded resources such as images, scripts and stylesheets). Only two-thirds of the compressible material on a page is actually compressed. In 80% of pages, 10 or more resources are loaded from a single host.

So, a typical web page has, with images and scripts, around 320 KB.

How do I test the page size?
I will save several pages with FireFox (“Web page, complete”); I will then report the sizes of the HTML files, and the average size of the pages with images, scripts and Flash files included, as they were saved;

It is recommended that the size of the HTML file (without any attachments: CSS, JavaScript, images etc.) should not surpass 150 kilobytes;

Two notes:
i. Saving a page from the browser doesn’t always reflect the exact size; first of all, the same page saved in FireFox, Opera, Internet Explorer or Chrome can have slight variations in size; on the other hand, if I average 5 random pages of the web site, I will get reasonable results; secondly, there are things (JavaScripts, embedded video clips, video players, XML files) which (potentially) won’t be saved on the hard drive; I know no good solution for this;
ii. When I visit the main page of a web site for the first time, the complete page will load (the HTML file, attachments, Flash, JavaScript, CSS, etc.); if I then visit another page of the web site, some of the elements will not load again, therefore the load time for the second page is, most of the time, smaller;
Then, using the Page Speed add-on from Google and YSlow from Yahoo! (both run only in FireFox, as add-ons for a different add-on – FireBug), I will test the page and export the results;

How do I test the speed of the web site?
Ideally, for a web site targeting a specific country, the speed should be tested from several important Internet hubs in that country, using Internet service providers representative for that country, and the results averaged;

My solution is a bit simplified – I will test with aptimize;

I will also use the WebPagetest web site to evaluate the site’s performance;

See also SEOmoz’s solutions for increasing site speed;

Practice:
I have saved several pages with FireFox and found pages with sizes of 15, 14, 16, 16 and 16 kilobytes (the HTML file only); if we add the rest of the files (JavaScript, CSS, images), the average page size is somewhere around 687 KB; the HTML-only size is pretty good, but the total size of the pages is large;

Regarding the Page Speed add-on, I have exported the results to ShowSlow.com for the homepage and a product page of your web site; you can see the details there;
You have better than average scores, 69-71 out of 100; as the important problems, I would note:

Add Expires headers
Compress components with gzip
Make JavaScript and CSS external
Leverage browser caching
Combine external JavaScript
Avoid document.write
Parallelize downloads across hostnames
Serve static content from a cookieless domain
Enable compression
Minify JavaScript
Minify CSS
Defer loading of JavaScript
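Two of these fixes are visible directly in the HTML; a minimal sketch, with hypothetical file names (gzip, Expires headers and caching, by contrast, are configured on the server, not in the page):

<!-- instead of large inline <style> and <script> blocks: -->
<link rel="stylesheet" href="css/site.css" />
<!-- external, cacheable JavaScript, with loading deferred until the page is parsed -->
<script src="js/site.js" defer></script>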

Overall though, these are good results;

Testing with aptimize, I got the results in the attached PDF file – Aptimize results:
– The web site load time, with uncompressed data, on a first visit is 20.2 seconds (total loading time for the web site);
– If I return to the web site, only some of the files reload, and the time drops to 9.7 seconds;

Also, the results of the testing done on WebPagetest to evaluate the site’s performance show:

View | Load Time | First Byte | Start Render | Document Complete (Time / Requests / Bytes In) | Fully Loaded (Time / Requests / Bytes In)
First View | 16.228s | 1.659s | 5.412s | 16.228s / 55 / 2,120 KB | 16.509s / 58 / 2,122 KB
Repeat View | 3.989s | 0.561s | 2.263s | 3.989s / 50 / 19 KB | 4.436s / 54 / 21 KB

I’ve also reported some additional loading times in the A2.3. Navigation test (crawling) section;

These are satisfying times, not perfect, but good;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): small problems;

Things to do next:
You should check the errors reported by Google Page Speed and Yahoo! YSlow and solve them as well as possible;

A14.2. Alternate text (ALT) for images and filenames
Theory:
For image optimization, alternate text is a very good solution;
You can obtain traffic from the search engines, based on image search, if you optimize images;
How to easily check the alternate text of images? In Opera: View => Images => No images;

Also, the name of an image (photograph, photo) from the web site should reflect what the image is about; in other words, you should not name an image:
DSC00004.gif,
but, instead, give it a real name:
Last-meeting-advisory-board-morning-SurfNet.gif
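Putting the two ideas together, a product photo might be marked up like this (the file name and the ALT text are hypothetical examples):

<!-- descriptive file name + image-specific alternate text -->
<img src="images/camasa-bloom-blue-detaliu-guler.jpg" alt="BLOOM BLUE – camasa slim fit, detaliu guler" />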

Practice:
The images on your web site generally have alternate text;
Well done! I have found some images without alternate text, especially in the page navigation, but as a general rule you do use alternate texts;
Unfortunately, you don’t use a specific ALT text for each image; you just put a general ALT text, using the product title; if I were you, I’d create a unique ALT text for each image;

Regarding the filenames, you don’t do very well either – for example: 1078_1305107719.jpg;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): average problems;

Things to do next:
I would put unique alternate texts on all the images of the web site and, in the future, keep having alternate texts for images;

A14.3. Pages with poorly formatted text
Theory:
Generally, text formatting helps with easy reading, with one condition: do not overdo it (for example, bolding words excessively, or using the hn tags abusively);

Practice:
I haven’t encountered this problem; generally you don’t have problems here, and you should keep doing things this way;

How big the problem is:
The gravity of the issues (very big problems, big problems, medium gravity, low gravity problems, zero problems): small problems;

Things to do next:
Keep formatting the text as you currently do;

(top)

B. Other good practices, other bad practices

Theory:
a. Good practices
Below is a list of good practices in handling a web site:
– If you target a keyword for a certain page, you should put the keyword first in the title of the page – <title> (see the sketch after this list);
– If you wish to start a web site from scratch, it’s best that the URL of the domain to be a targeted keyword:
www.skodacars.com
www.potatoesbangladesh.com
www.qualityseo.eu
www.modernweddings.co.uk
(as a note – the dash “-” is unnecessary);
– It’s best for the <h1> tag to start with the targeted keyword;
– The texts which you put for links to your own pages (link to “About us” page, for example) and to pages from outside (link to a page from outside) are very important; it’s best that those links are not:
“Click here for details”,
but instead:
“See the page – silkworm rearing – for details”;
– A page with texts should contain, at a minimum, 400-500 words (only the content, ignoring the header, the menu, the sidebar, the footer etc.);
– Do you wish to have top positioning for a page? You need:
i. Links to the web site, in general;
ii. Links to that specific page, specifically;
It’s best that the links:
i. Are connected with the domain of the web site;
ii. Are connected with the domain of the page for which you wish top positioning;
iii. Are not obtained all at once, but constantly and at a steady pace;
iv. Are on a good anchor text (“potatoes”, instead of “click here”);
v. Come from various sources (not just a type of web sites, but as many as possible);
vi. Come from trustworthy web sites, which in turn have obtained links especially from trustworthy web sites;
vii. Come not just from other web sites, although those are important too, but also from the interior – from other pages of your own web site;
– The site should have a clear, simple and easy to understand architecture; someone who enters the web site for the first time should easily understand what it is about and be able to navigate it without any problems;
– Give links to other web sites, with no fear whatsoever; you shouldn’t be just a destination for links, but also a source; it’s up to you to decide whether you want to link to direct competitors;
– The server which hosts your web site:
a. Should be located in the specific country you target; also, it should have as much bandwidth as possible; the reason is simple – the faster a web site loads, the better it will be seen by the search engines;
b. It’s best for a web site to be up and running as much as possible – it sounds obvious, but you should make sure your web site has a yearly uptime as high as possible;
– A site from a specific country should have a top-level domain (TLD) for that specific country:
.com for international web sites and sites based in the US;
.co.uk for sites in Great Britain;
.eu for sites in the European Union;
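The sketch promised above – a hypothetical page for the “potatoes” example, pulling together the title, h1 and anchor text practices from this list:

<head>
  <!-- the targeted keyword comes first in the title -->
  <title>Potatoes – fresh vegetables, delivered in Bangladesh</title>
</head>
<body>
  <!-- the h1 also starts with the targeted keyword -->
  <h1>Potatoes – our complete offer</h1>
  <!-- descriptive anchor text instead of “click here” -->
  <p>See the page <a href="/vegetables/cucumbers">Cucumbers – our vegetable offer</a>.</p>
</body>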

b. Bad practices
Below you will find a list of bad practices in handling a web site:
– Don’t serve content dedicated to the search engines and different content to the visitors; in other words, don’t let the search engines have the impression that you have useful articles on cucumbers while, instead, the users see pages with illegal trafficking of cucumbers;
– Avoid obtaining links from sources of links with poor quality, they can adversely affect you;
– Also, don’t give links to web sites with a bad reputation;
– Don’t write texts on the page in colors similar to the background, meant to be seen by the search engines while the regular visitors have problems reading them; avoid this practice;
– Don’t fill a page with the same keywords over and over; avoid repetition without a purpose;
– Don’t buy links;
– Don’t put an excessive number of links to other web sites in the footer; it can be a SPAM signal;

(top)

C. Conclusions – important things + final conclusions

1. You occupy top positions for searches related to your brand, and you even have good results for searches on top keywords; I’d look into the “camasa” issue;

2. The option without www. doesn’t redirect, as it should, to the option with www.; very big problem!

3. Sadly, yes: all the pages have the same description, and even the homepage doesn’t have a unique one; it’s also a bit too short;

4. As a problem: you have relatively few links, and not of great quality; on the plus side, some of the links that you do have are related to your business;

Final note: if, in the future, you want to build a web site that observes certain SEO criteria, also see:
The Web Developer’s SEO Cheat Sheet.

_________
Bottom line: the web site camasi.com.ro is a web site with very good search engine optimization.
