A few days ago we reported that Dell had announced Project Sputnik,
a six-month research effort to explore the possibility of creating an
open source Ubuntu laptop targeted directly at developers.
Dell has now announced that the experiment was successful and that they were surprised by the amount of interest it generated.
“Since we announced Project
Sputnik a little over two months ago, we have continued to be amazed by
the amount and quality of interest and input we have received,” said Barton George, Project Sputnik lead and director of Dell’s web vertical marketing.
As the project has matured, Dell is now taking Project Sputnik from pilot to product this fall.
It plans to deliver an official developer laptop based on the Dell XPS
13 with Ubuntu 12.04LTS preloaded, available in select geographies.
Made possible by an internal
innovation fund, the system will offer developers a complete
client-to-cloud solution. The included software will allow developers to
create “microclouds” on their laptops, simulating a proper, at-scale
environment, and then deploy that environment seamlessly to the cloud.
The solution is based on the current high-end configuration of the Dell
XPS 13 laptop: 13.3-inch screen with edge-to-edge glass (1366x768
resolution), 2GHz Intel Core i7 processor, 4GB of RAM, 256GB SSD,
dimensions of 0.88 x 12.56 x 9.3 inches and a weight of 2.99 lbs. Check out the XPS 13
product page here.
The PDF file format was originally created by Adobe in the early ’90s
and there are now over 700 million PDF documents on the Internet
according to Google (search for filetype:pdf).
There are several
reasons why the PDF file format is so popular for exchanging all sorts
of documents including presentations, CAD Drawings, invoices and even
legal forms.
PDF files are generally more compact (smaller in size) than the source document and they preserve the original formatting.
Unlike
Word and other popular document formats, the content of a PDF file
cannot be modified easily. You can also prevent other users from
printing or copying text from PDF documents.
You can open a PDF
file on any computer or mobile device with free software like Adobe
Acrobat Reader. Google Chrome can read PDFs without requiring plugins
and it can create PDFs.
Edit PDF Files using Free Alternatives to Adobe Acrobat
While
PDF Files are “read only” by default, there are ways by which you can
edit certain elements* of a PDF document for free without requiring the
source files or any of the commercial PDF editing tools like Adobe
Acrobat.
We will primarily focus on tools that let you alter the actual contents of a PDF file.
An Online PDF Editor for Basic Tasks
Sometimes you need to
make minor changes to a PDF file. For instance, you may want to hide
your personal phone number from a PDF file before uploading it online or
may want to annotate a page with notes and freehand drawings.
You can perform such edits in a PDF easily with PDFEscape.com, an online PDF editor that is free and also lets you edit password-protected PDF documents in the browser.
With
PDF Escape, you can hide* parts of a PDF file using the whiteout tool
or add annotations with the help of custom shapes, arrows, text boxes
and sticky notes. You can add hyperlinks to other PDF pages / web
documents.
[*] Hiding is different from redaction because here we
aren’t changing the associated metadata of a PDF file but just hiding
certain visible parts of a PDF file by pasting an opaque rectangle over
that region so that the stuff beneath the rectangle stays invisible.
Change Metadata of PDF Files
If you would like to edit the metadata associated* with a PDF document, check out BeCyPDFMetaEdit.
This is a free utility that can help you edit properties of a PDF
document including the title, author name, creation date, keywords, etc.
The tool can also be used for removing PDF passwords as
well as for encrypting PDF documents such that only users who know the
password can read the contents of your PDF files. And since this PDF
metadata plus bookmarks editor can be executed from the command line,
you can use it to update information in multiple PDF files in a batch.
[*] If you are planning to post your PDF files on the web, you should
consider adding proper metadata to all the files, as that will help
improve the organic rankings of your PDF files in Google search results.
Edit the Text of a PDF File
If
you want to edit the text in a PDF file but don’t have access to the
source documents, your best bet is to convert the PDF file into an
editable Word document or an Excel spreadsheet, depending on the
contents of the PDF.
Then edit these converted PDFs in Microsoft
Office (or Google Docs) and export the modified files back into PDF
format using any PDF writer.
If your PDF document is mostly text, you may use the desktop version of Stanza
to convert that PDF into a Word document. If the document includes
images, charts, tables and other complex formatting, try the online PDF
to Word converter from BCL Research or the one from NitroPDF – the former offers instant conversion while the latter can take up to a day, though it yields more accurate results.
Advanced PDF Editing (Images, text, etc.)
Now
that you know the basic PDF editing tools, let’s look at another set of
PDF editors that are also free but can help you do some more advanced
editing like replacing images on a PDF file, adding signatures, removing
blocks of text without breaking the flow of the document, etc.
First there’s PDF XChange,
a free PDF viewer and editor that you may also use for typing text
directly on any PDF page. PDF XChange also supports image stamps, so you
may use the tool for signing PDF files or for inserting images anywhere on a PDF page.
Then you have Inkscape, a free vector drawing tool (like Adobe Illustrator) that can natively import and export PDF content.
Video: How to Edit PDF Files with Inkscape
With Inkscape, you can select any object on a PDF page (including
text, graphics, tables, etc.) and move it to a different location or
even remove it permanently from the PDF file. You can also annotate
PDF files with Inkscape or draw freehand on a page using the built-in
pencil tool.
The next tool in the category of advanced PDF editors is OpenOffice Draw with the PDFImport extension.
OpenOffice Draw supports inline editing so you can easily fix typos in a
PDF document or make formatting related changes like replacing color,
increasing or decreasing the text size, replacing the default
font-family, etc.
Like Inkscape, the OpenOffice toolbox also
includes support for annotations, shapes, images, tables, charts, etc.
but here you have more choices and the software also looks less complex.
The OpenOffice suite is a little bulky (they don’t provide a standalone
installer for Draw) but if you have the bandwidth, OpenOffice is the best tool for manipulating PDF documents when you don’t have the budget for Adobe Acrobat.
When you send an email to someone, the message goes to an SMTP server
which then looks for the MX (Mail Exchange) records of the email
recipient’s domain.
For instance, when you send an email to hello@gmail.com,
the mail server will try to find the MX records for the gmail.com
domain. If the records exist, the next step would be to determine
whether that email username (hello in our example) is present or not.
Using similar logic, we can verify an email address from our computer without actually sending a test message. Here’s how:
Let’s say we want to verify whether the address billgates@gmail.com exists.
Step 1. Enable telnet in Windows. Or if you already have the PuTTY utility, skip this step.
Step 2. Open the command prompt and type the following command:
nslookup -type=mx gmail.com
This
command will extract and list the MX records of a domain as shown
below. Replace gmail.com with the domain of the email address that you
are trying to verify.
Step 3.
As you may have noticed, it is not uncommon to have multiple MX records
for a domain. Pick any one of the servers mentioned in the MX records,
perhaps the one with the lowest preference number (in our example,
gmail-smtp-in.l.google.com), and “pretend” to send a test message to
that server from your computer.
For that, go to command prompt window and type the following commands in the listed sequence:
3a: Connect to the mail server:
telnet gmail-smtp-in.l.google.com 25
3b: Say hello to the other server
HELO
3c: Identify yourself with some fictitious email address
mail from:<labnol@labnol.org>
3d: Type the recipient’s email address that you are trying to verify:
rcpt to:<billgates@gmail.com>
The
server response to the ‘rcpt to’ command will give you an idea of whether an
email address is valid or not. You’ll get an “OK” if the address exists,
else a 550 error like:
abc@gmail.com – The email account that you tried to reach does not exist.
support@gmail.com – The email account that you tried to reach is disabled.
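The whole exchange (steps 3a–3d) can also be scripted with Python’s standard smtplib module. This is a rough sketch rather than a definitive checker – the probe sender address is the same placeholder used above, and keep in mind that catch-all servers and greylisting can accept or defer RCPT for addresses that don’t actually exist:

```python
import smtplib

def rcpt_status(mx_host, address, probe_from="labnol@labnol.org"):
    """Run steps 3a-3d against an MX host and return the numeric
    SMTP reply code for RCPT TO (250 = accepted, 550 = no such user)."""
    server = smtplib.SMTP(mx_host, 25, timeout=10)   # 3a: connect to the mail server
    try:
        server.helo()                                # 3b: say hello
        server.mail(probe_from)                      # 3c: fictitious sender
        code, _reply = server.rcpt(address)          # 3d: probe the recipient
        return code
    finally:
        server.quit()

def looks_valid(code):
    """Any 2xx reply to RCPT TO suggests the mailbox exists."""
    return 200 <= code < 300
```

For example, `looks_valid(rcpt_status("gmail-smtp-in.l.google.com", "billgates@gmail.com"))` would perform the live check described above.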
That’s it! If the address is valid, you may perform reverse email search to find the person behind the address. And if you get stuck somewhere, this step-by-step video should help:
You get an email from a person with whom you have never interacted
before and therefore, before you reply to that message, you would like
to know something more about him or her. How do you do this without
directly asking the other person?
Web search engines are obviously
the most popular place for performing reverse email lookups but if the
person you’re trying to research doesn’t have a website or has never
interacted with his email address on public forums before, Google will
probably be of little help.
No worries – here are a few tips and online services that may still help you uncover the identity of that unknown email sender.
#1. Find the sender’s location
Open the header of the email message and look for lines that say “Received: from” followed by an IP address in square brackets. If there are multiple entries, use the IP address mentioned in the last entry.
Now paste the IP address into this trace route tool and you should get a fairly good idea of the location of the email sender.
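The header-scanning step can be sketched in Python with the standard library. The sample headers below are illustrative (the IPs come from the TEST-NET documentation ranges, not real servers):

```python
import re
from email import message_from_string

def sender_ip(raw_message):
    """Return the bracketed IP from the last 'Received:' header,
    i.e. the hop closest to the original sender, or None."""
    msg = message_from_string(raw_message)
    received = msg.get_all("Received") or []
    # Each relay prepends its own header, so the last entry is the earliest hop.
    for hop in reversed(received):
        match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", hop)
        if match:
            return match.group(1)
    return None

# Illustrative headers only:
sample = ("Received: from mx.example.net ([203.0.113.7])\n"
          "Received: from sender.example.org ([198.51.100.9])\n"
          "Subject: hello\n\nbody\n")
print(sender_ip(sample))  # -> 198.51.100.9
```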
#2. Reverse email search with Facebook
Facebook has 450 million users worldwide and there’s a high probability that the sender may also have a profile on Facebook.
Unlike
LinkedIn and most other social networks, Facebook lets you search for users
by email address, which should make your job simpler. Just paste the
email address of the sender into the Facebook search box and you’ll immediately know if a matching profile exists in the network.
If
you are able to locate that person on Facebook, download his profile
picture and then upload it to Google Images (click the camera icon in
the search box). This acts as a reverse image search engine so you can locate his other social profiles where he may have used the same picture.
#3. Check all the other Social Networks
You can use a service like Knowem to quickly determine if a profile with a particular username exists in any of the social networks.
If
the email address of the sender is something like green_peas@hotmail.com,
there’s a probability that he or she created accounts on some
other social network using the same alias “green_peas” – put that into knowem.com to confirm.
#4. People Search
Finally, if nothing works, you should try a people search service like Pipl and Spokeo – both services let you perform reverse email lookups but Spokeo has a more comprehensive database than Pipl.
Other
than regular web documents, Spokeo also scans social networks and even
the whois information of domain names to find any bit of information
associated with an email address. However, some of the results returned
by Spokeo are only available to subscribers.
Before leaping to the Ubuntu site to download the freshest bytes and
bits, you may want to wait for a bit. Canonical tells me that the site
is currently getting overwhelmed and some people are unable to
get into it. For me, the site and download links worked, but at speeds
of about 100Kbps, they certainly aren’t fast.
Use whichever region is closest to you. You can either manually add these lines to the
/etc/apt/sources.list
file or paste them into the custom URL field of the Software Sources application.
East Coast US:
deb http://us-east-1.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
West Coast US (California):
deb http://us-west-1.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
West Coast US (Oregon):
deb http://us-west-2.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
South America (São Paulo, Brazil)
deb http://sa-east-1.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
Western Europe (Dublin, Ireland)
deb http://eu-west-1.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
SouthEast Asia (Singapore)
deb http://ap-southeast-1.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
NorthEast Asia (Tokyo)
deb http://ap-northeast-1.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ precise main restricted universe multiverse
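All of these entries share one pattern, so a tiny Python helper can generate the line for any region. A sketch, assuming the standard AWS region identifiers and 'precise' as the Ubuntu 12.04 release name:

```python
def ec2_mirror_line(region, release="precise"):
    """Build the sources.list line for Ubuntu's EC2/S3 mirror in an AWS region."""
    return ("deb http://%s.ec2.archive.ubuntu.com.s3.amazonaws.com/ubuntu/ "
            "%s main restricted universe multiverse" % (region, release))

# Print the entries for a few regions:
for region in ("us-east-1", "us-west-1", "eu-west-1", "ap-southeast-1"):
    print(ec2_mirror_line(region))
```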
I've always gone with the 'Select Best Server' GUI tool: go to Edit -> Software Sources in the menu. (You
can also do this from the Preferences for Synaptic or the Update
Manager.)
Under the Ubuntu Software tab there's a drop-down next to "Download
from:". If you select "Other..." you'll get a button that says "Select
Best Server"; clicking it gets Ubuntu to run some tests to see which
mirror will give the best download speed.
I can't comment on whether this is better or worse than the method
you found for yourself. Perhaps someone with some expertise on the
issue can comment!
Internet search engines provide end-users a quick and easy to use way to get access to information available on the Internet. As more and more Internet content is multimedia, you need to ensure that your content is properly indexed by the search engines so that users can discover it. This article will provide an overview of how to enable your video content to be SEO enabled and indexed by the various search engines (Google, Yahoo, Microsoft).
There is a standard document, called a sitemap, that search engine indexers look for when examining your site. This document concisely tells the search engines what content is exposed on your site, the metadata for that content, and where that content is located on your site. A sitemap is an XML file that follows a standard specification.
There are two different flavors of sitemaps that you can (and should) create:
Sitemap – A sitemap that will index your content in the standard text based search engines such as www.google.com or search.yahoo.com
Video Sitemap – A sitemap that will index your content in media-centric search engines such as video.google.com
Note that both of these sitemaps index the metadata about your video content and provide links to end-users. The difference is where the metadata that is indexed is obtained from and how the content is surfaced in search results.
A proper SEO strategy for your video content will include creating both a standard sitemap as well as a video sitemap. From a priority perspective, you want to create a standard sitemap first and then a video sitemap.
Standard Sitemaps
Sitemaps follow the sitemap specification that is defined here: www.sitemaps.org. The purpose of the sitemap file is to provide a list of URLs on your site to the search engines. The only other information associated with a URL is when the page was last modified and how frequently the page changes. Note, there is no metadata about your content in this index. Because the sitemap is page-centric, we need to create a model where each video in your library will have a unique page, or URL, associated with it. This can be accomplished by having a single page whose behavior and content can be dynamically changed by passing in different query parameters to the page. For example, if you have a URL like http://www.example.com/video.html?videoId=123, you would have the video.html page look for the videoId query parameter (videoId=123) and modify the contents of the page returned to the browser to contain information about the video with id 123. This would be done on the server-side of your application where the page would look for the ID and then use the Video Cloud Media APIs to fetch metadata about the video and write it into the page.
For the purposes of this article, we assume that there is a single landing page on your site which can be used to play back all video content for your site. Different query parameters will be passed to the page to tell the page what video to play back and what video to surface metadata for. For example, let's say you have a page that displays the contents of an entire playlist and queues up a specific video in a player. You tell the page what playlist to display metadata for via a bclid query parameter and what video to surface in a player via the bctid parameter. Thus, what we want to do is create a URL for every unique playlist and video id combination.
Here is an example of sitemap that will be created:
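A minimal sketch of generating such a sitemap in Python follows. The base URL and video IDs are illustrative placeholders, and the markup follows the sitemaps.org schema:

```python
from xml.sax.saxutils import escape

def build_sitemap(base_url, video_ids, changefreq="weekly"):
    """Emit a minimal sitemaps.org <urlset> with one URL per video ID."""
    entries = []
    for vid in video_ids:
        # One unique page URL per video, via the videoId query parameter.
        loc = escape("%s/video.html?videoId=%s" % (base_url, vid))
        entries.append(
            "  <url>\n"
            "    <loc>%s</loc>\n"
            "    <changefreq>%s</changefreq>\n"
            "  </url>" % (loc, changefreq)
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(entries) + "\n</urlset>"
    )

print(build_sitemap("http://www.example.com", [123, 124]))
```

On a real site this list of IDs would come from your video library (e.g. the Video Cloud Media APIs mentioned above).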
A video sitemap is similar in concept to the standard sitemap file; there will be an entry in this sitemap file per video in your account. In fact, a video sitemap uses the sitemap schema as its base and adds additional tags specific to video metadata.
We won't go through the meaning of every element in the video sitemap file, but we do want to touch upon two elements: <video:content_loc> and <video:player_loc>. According to the video sitemap specification:
You must provide one of either the <video:content_loc> or <video:player_loc> elements.
The video:content_loc element would be used to provide a reference to your video file (FLV or MP4) directly. We don't want to do this for several reasons:
If you use FMS for streaming, the video sitemap specification does not allow you to reference these files directly; you can only use HTTP.
If Google surfaces your content directly in the results page, you want to make sure it is played back through your player to keep all of your analytics, advertising, and branding with your content.
For these reasons, we want to use the video:player_loc element rather than video:content_loc. This will point to a Single Video Player from the Video Cloud system via the Player URL publishing code. We can pass different video IDs to this player using the bctid query parameter. We don't restrict this player to a particular domain, which allows it to be embedded directly in the search results.
Here is an example of the video sitemap that will be created:
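As a sketch of what such a file contains, here is a small Python generator. The page URL, player URL, title and description are illustrative placeholders; the namespace is the one defined by the video sitemap specification:

```python
from xml.sax.saxutils import escape

def video_sitemap(entries):
    """Wrap video entries in a <urlset> that declares the video namespace.
    Each entry is a dict with 'page', 'player', 'title', 'description'."""
    urls = []
    for e in entries:
        urls.append(
            "  <url>\n"
            "    <loc>%s</loc>\n"
            "    <video:video>\n"
            "      <video:title>%s</video:title>\n"
            "      <video:description>%s</video:description>\n"
            '      <video:player_loc allow_embed="yes">%s</video:player_loc>\n'
            "    </video:video>\n"
            "  </url>" % (escape(e["page"]), escape(e["title"]),
                          escape(e["description"]), escape(e["player"]))
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
        '        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">\n'
        + "\n".join(urls) + "\n</urlset>"
    )

print(video_sitemap([{
    "page": "http://www.example.com/video.html?bctid=123",
    "player": "http://www.example.com/player?bctid=123",
    "title": "Sample video",
    "description": "An illustrative entry",
}]))
```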
A complete SEO strategy for your video content should include creating both a standard sitemap as well as a video sitemap. This will ensure that your content is indexed across the widest swath of search engines. Additionally, your content will be surfaced in the most aesthetically pleasing manner for the point of discovery. This article has outlined an approach for providing a unique URL per video and samples for generating the sitemap files that the engines will index.
With the rising importance of local search optimisation, it is worth noting that hundreds of millions of people now own a smartphone, a number expected to rise to one billion by 2013. With this in mind, there still seems to be a relatively untapped market for mobile search optimisation within the internet marketing industry.
One of the reasons this particular market is not being fully exploited is that it is still not fully understood by many search engine optimisation professionals. In fact, a key mistake that SEOs often make is assuming that search engine results appear the same on a computer and on a mobile device. This is not the case, and with smartphone users becoming increasingly savvy about their devices’ capabilities, perhaps it’s time that more businesses and SEO agencies tapped into this potentially huge market.
A crucial example of how mobile search differs from desktop search is that it is inherently local, and therefore local listings feature more prominently. This could be a great area for an emerging local business to exploit. One way a company could do this is by ensuring that it is optimised for Google Places listings, possibly combined with a domain that focuses on geo-specific keywords. This may seem to argue against the desktop SEO logic that it is better to consolidate all links in one canonical domain; however, creating an alternate domain focused intensely on local search could provide more traffic, both physically and digitally. Determining the best approach to this dilemma requires a keen marketing mind to judge which option has the potential to bring your business the most success.
Another opportunity that mobile search creates is searching without a ‘search term’. The best example of this is the Google Goggles app, which allows the user to take a picture of something that then becomes the subject of the search. The app works for books, films, landmarks, businesses and logos. A company optimising for mobile search would ensure that its logo is clearly displayed and optimised, and attached to a mobile site allowing users to go straight to it after finding the company logo. In a broader context, not necessarily logos, this could have a huge impact. Pretty much everyone with a smartphone has a built-in camera, so if they take a picture of a product that relates to your company, it makes business and practical sense to ensure you have a mobile site optimised for those images.
Some other differences between mobile and desktop search include the fact that the vertical listings will often differ in mobile search, with images featuring higher in the SERPs. Smartphone results also produce different filters at the top of the SERPs; fewer filters can mean less distraction for the user and potentially a higher click through rate to your site.
There are many more differences between mobile search and desktop search, but I selected these few to illustrate the point that there is incredible potential within this market. With relatively few companies targeting this specific sector of digital marketing, there is a real opportunity for business owners willing to grasp it.
Videos are some of the hottest content on the net. Everyone receives those periodic emails from a friend or business associate advising, ‘Check this out!’ Videos are the main form for viral content, add interest when embedded in a site, and draw traffic like a magnet.
Videos are easy to make, too. Anyone with any kind of camera can create one, and anyone with a computer can upload one. Popular videos don’t have to be high-quality, have good production values or even be well-edited to succeed. Skilled children can make them.
Why, then, isn’t everyone rushing to include video content on their site? There’s a reason SEO experts don’t push video content. Videos, like images, present problems when it comes to search engine crawlers. The search engines can’t see video content. This means that all the effort you put into getting a video onto your site is wasted, from a search engine optimisation point of view.
This doesn’t mean that you should avoid video content. Videos have proven attraction for internet users, and as time goes by they are being used increasingly as part of internet marketing strategies. The value of video content as an attention magnet can’t be ignored. The answer is to work around your video content to ensure that everything is done that can be done to explain its content to a search engine.
Optimise your videos
The search engines rely on extraneous information to determine what your video is all about. Much like they do when ranking your site, the search engines examine the on-page factors around the video, the links to the video and links from the page the video is on. On the whole, search engines are unable to view the content of the video, but it is thought they can detect some on-screen text. This makes it worth optimising any subtitles and titles.
Many people forget that the video file itself can be optimised. Use your keywords in the title and, if the platform your video is on allows it, map out an SEO-friendly description and tags. If you host the video on a site like YouTube, don’t forget to re-enter all of the metadata.
Format will also affect your video’s search engine friendliness. As time goes by, Google is developing techniques to work with Flash content. This may mean Flash videos will gain some SEO standing in the future.
How to maximise SEO techniques around videos
The content you put around your video is vital. Writing a brief description of the video will help users and search engines identify the content. Ensure you use the word ‘video’ as part of your keywords for that page. The very fact that you have a video can prove part of the attraction. Finally, include a link to a transcript of the video. This will ensure the content is used to maximum SEO effect.
Here is a quick comparison of the performance of the new InnoDB plugin between the different compression and row formats that were introduced recently.
The table is a pretty simple one:
CREATE TABLE `sbtest` (
`id` int(10) unsigned NOT NULL,
`k` int(10) unsigned NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(60) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k` (`k`)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
The table is populated with 10M rows with average row length being 224 bytes. The tests are performed for Compact, Dynamic and Compressed (8K and 4K) row formats using MySQL-5.1.24 with InnoDB plugin-1.0.0-5.1 on Dell PE2950 1x Xeon quad core with 16G RAM, RAID-10 with RHEL-4 64-bit.
Here are the four test scenarios:
1. No compression, ROW_FORMAT=Compact
2. ROW_FORMAT=Compressed with KEY_BLOCK_SIZE=8
3. ROW_FORMAT=Compressed with KEY_BLOCK_SIZE=4
4. ROW_FORMAT=Dynamic
All the above tests are repeated with innodb_buffer_pool_size=6G and 512M to make sure one fits everything in memory and another one overflows. The rest of the InnoDB settings are all default except that innodb_thread_concurrency=32.
Here is the summary of the test results:
Table Load:
Load time from an SQL dump containing 10M rows (not batched)
             Compact    Compressed (8K)    Compressed (4K)    Dynamic
Load time    28m 18s    29m 46s            36m 43s            27m 55s
File Sizes:
Here is the size of the .ibd file after each data load
             Compact    Compressed (8K)    Compressed (4K)    Dynamic
.ibd size    2.3G       1.2G               592M               2.3G
Data and Index Size from Table Status:
Here are the Data and Index sizes in bytes from SHOW TABLE STATUS; note that these are the original data sizes rather than the compressed sizes
         Compact       Compressed (8K)    Compressed (4K)    Dynamic
Data     2247098368    2247098368         2249195520         2247098368
Index    137019392     137035776          160301056          137019392
Compression Stats:
Here are the compression stats after the table is populated, from information_schema.INNODB_CMP; notice that 4K takes more operations and more time for both compression and uncompression
Page_size    Compress_ops    Compress_ops_ok    Compress_time    Uncompress_ops    Uncompress_time
8192 (8K)    446198          445598             73               300               0
4096 (4K)    1091421         1012917            463              38801             13
Performance:
Here is the performance of various row formats with threads ranging from 1-512 for both 512M and 6G buffer pool size for both concurrent reads and writes.
[Charts: read/write throughput for 1-512 threads with the 512M and 6G buffer pools]
Observations:
A few key observations from the performance tests I performed. I haven't looked at any of the sources, so I could be wrong – corrections are welcome. It's hard to draw firm conclusions from these scenarios, but they help estimate what is what.
* The load times are almost the same, except that 4K compression takes longer than the rest; compression in general hurts INSERT/load performance a little.
* With Compact or Dynamic there is no compression, so the data and index file sizes are almost the same.
* SHOW TABLE STATUS for a compressed table reports the original Data_length and Index_length rather than the compressed sizes (maybe a bug, or InnoDB needs to extend SHOW TABLE STATUS to expose compressed sizes; right now the only option is to check the files manually).
* 8K compression reduced the .ibd file by nearly 50% (1.2G from 2.3G) and 4K compression reduced it to roughly a quarter (592M from 2.3G); this will vary with table types and data.
* 8K compression takes fewer ops and less time for both compression and decompression compared to 4K (as expected).
* When the InnoDB buffer pool is large enough to hold the data in memory, compression is a bit of an overhead, but you save space.
* When the working set overflows the buffer pool (I/O bound), compression seems to really help.
* 4K compression in general seems slower than 8K or any other row format.