Sunday, July 27, 2008
at
8:14 PM
Posted by
Suresh Kumar A
A research paper, BrowseRank: Letting Web Users Vote for Page Importance, delivered at a conference in Singapore this week, highlights Microsoft Research Asia's alternative to Google's PageRank algorithm, BrowseRank: "The more visits of the page made by the users and the longer time periods spent by the users on the page, the more likely the page is important. We can leverage hundreds of millions of users' implicit voting on page importance."
The new process, in theory, ranks sites based on their usage and user behavior patterns. Google's algorithmic stew for rankings remains a great mystery, and an ever-changing set of goalposts that are constantly gamed by companies looking to leverage search results to drive traffic and revenue. Microsoft sees Google's strength in this regard as also being its weakness, arguing that web developers have many opportunities to influence the ranking system unfairly. BrowseRank, on the other hand, would take a look at actual user behavior on a site, with Microsoft arguing that the more people are engaged by a site, the more likely it is to be relevant.
Search is of tremendous importance to the Internet for many reasons. For one thing, search engines are highly influential middlemen that steer users to Web sites they may not be able to find on their own. For another, queries typed into search engines can be powerful -- and in Google's case highly profitable -- indications of what type of advertisement to place next to the search results.
But Microsoft lags leader Google and No. 2 Yahoo in search. It's trying hard to catch up, for example with unsuccessful proposals to acquire Yahoo or its search business that would cost the company billions of dollars. And Microsoft just bought search start-up Powerset.
Google isn't putting all its eggs in the PageRank basket, though.
"It's important to keep in mind that PageRank is just one of more than 200 signals we use to determine the ranking of a Web site," the company said in a statement. "Search remains at the core of everything Google does, and we are always working to improve it."
Thursday, July 24, 2008
at
6:13 AM
Posted by
Suresh Kumar A
V-Enable, a voice-enabled mobile 411 system, conducted a study based on a random sample of 20,000 searches in major metropolitan areas from customers of several V-Enable partner carriers, including Alltel and MetroPCS. The findings reveal some interesting trends that appear tied to the recession. For one thing, people are eating more pizza! The results for the top restaurant searches for the period between October 2007 and June 2008 are:
1. Pizza Hut
2. McDonald’s
3. Domino’s Pizza
4. Starbucks
5. Papa John’s Pizza
6. Little Caesars Pizza
7. Taco Bell
8. Burger King
9. Wendy’s
10. Denny’s
Sit-down restaurants like Olive Garden, Applebee's and Red Lobster have dropped off the list, while recession-proof comfort food like Pizza Hut and Domino's has shot to the top. Searches for Pizza Hut rose 380% during the period, and searches for Domino's Pizza increased 980%. High gas prices are keeping people at home ordering in, and they are opting for cheaper alternatives. Financial analysts have explored this area extensively and have deemed several of these restaurant chains "recession-proof stocks."
There are several other search-related economic indicators from V-Enable. U-Haul, a company that was never on any top 50 list, jumped to #23 in general search, possibly because of a rise in foreclosures. Macy’s dropped from #17 to #49 in retail, a direct correlation to the fact that people just don’t have the discretionary income that they used to. Motel 6 has never showed up on a top 50 list, but they are now #37 in general search, quite possibly because travelers can’t afford the costly alternatives. Mobile search happens in real-time and is unaffected by SEO, making these statistics arguably more reflective of consumer sentiment than web search.
V-Enable is a mobile information system, where users can speak the name of a restaurant or residential listing and receive location and contact information. The company also has live operators working behind the scenes so that users can call and get human assistance, if necessary. V-Enable sent us similar retail statistics in December. The company is backed by $10.1 million over 3 rounds from Siemens Mobile Acceleration Corporation, Sorrento Ventures, SoftBank Capital and Palisades Ventures.
credit :
techcrunch
Monday, July 21, 2008
at
5:30 PM
Posted by
Suresh Kumar A
1. Google can be your phone book. Type a person's name, city and state directly into the search box, and Google will deliver the phone number and address at the top of the results. This feature works for business listings too.
Google can also work as a reverse directory: if you have a phone number, type it into the search box and Google will deliver results that match that number.
2. Google can be your calculator. Type a math problem into the search box and Google will compute and display the result. You can spell out the equation in words (one plus one, six divided by two), use numbers and symbols (4*5, 6/2), or type a combination of both (ten million * pi, 15% of six).
3. Longer queries are better, but shorter is okay. Google is designed to deliver high-quality results even for one- or two-word queries, so it's fine to keep your search short. But adding a few more words will usually yield better results.
For example: when searching for information on applying to colleges, include the word admissions after the name of the university you are searching to get more relevant results.
4. Use quotation marks when precision matters. Typing "the truth of linux" into the Google search box will yield Web pages containing that exact phrase - but leaving off the quotes will produce an assortment of loosely related pages.
The reason: adding quote marks around the search query tells Google to look for occurrences of the exact phrase as it was typed. That makes quote marks especially helpful when searching for song lyrics, people's names or expressions such as "to be or not to be" that include very common words.
5. Google can be your dictionary. Type define followed by any English word into the search box, and Google will deliver a quick definition at the top of the results.
6. Forget plurals. Google automatically searches for all the stems of a word, so you don't need to do separate searches for sleep, sleeps and sleeping. Just type one of the words into the search box, and Google will take care of the rest, giving you the results all in one list.
Next Post : Google Search Tips - part2
Sunday, July 20, 2008
at
6:00 PM
Posted by
Suresh Kumar A
Bigtable is an ongoing research project to create a structured database that operates in a distributed environment. It is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers.
It is like a spreadsheet with a huge, practically limitless number of rows and columns, with each row identified by a key. This differs from a traditional relational database, which has many tables linked to one another by keys; Bigtable has a single table with limitless rows and columns. Queries that would use joins, subselects and the like in a relational database are expressed differently against Bigtable.
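As a rough mental model only (this sketch uses a nested PHP array; it is not how Bigtable is actually implemented), you can picture a Bigtable cell as being addressed by a row key, a column, and a timestamp:
<?php
// Illustrative only: a Bigtable-like cell is addressed by
// (row key, column family:qualifier, timestamp) -> value.
$table = array();
$table['com.cnn.www']['contents:']['1136102400'] = '<html>...</html>';
$table['com.cnn.www']['anchor:cnnsi.com']['1136102400'] = 'CNN';

// Reading a cell is a straight key lookup rather than a join.
echo $table['com.cnn.www']['anchor:cnnsi.com']['1136102400'];
?>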
Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. Bigtable has successfully provided a flexible, high-performance solution for all of these Google products.
Saturday, July 19, 2008
at
9:17 AM
Posted by
Suresh Kumar A
1. Christianity has Jesus as Lord and Savior. Linux has Linus Torvalds.
2. Jesus Christ was followed by the disciples, while Linus Torvalds was assisted by the original programmers.
3. Christianity has different Denominations. Linux has Distributions.
4. Accepting Jesus as your savior saves you from the fires of hell and demons, while installing Linux on your computer will protect you from the hell of data loss, viruses, and malware.
5. Christianity views Satan as the being of ultimate evil, while Linux perceives Bill Gates as the Evil One.
6. Linux is defined by its source code, while Christianity follows the Bible.
7. Christianity offers salvation for no cost at all other than the occasional tithe and charity donation. Linux is also free with only the occasional shipping charge or support fee.
8. Religious folks are instructed to go out into the world and teach others the good news. Linux users also do their part in converting those that have been drawn into the evil darkness of Microsoft.
Credit :
divinecaroline.com
Friday, July 18, 2008
at
3:35 AM
Posted by
Suresh Kumar A
With the Adobe® AIR™ runtime, you can deliver branded rich Internet applications (RIAs) on the desktop that give you a closer connection to your customer.
Adobe AIR uses the same proven, cost-effective technologies used to build web applications, so development and deployment is rapid and low risk. You can use your existing web development resources to create engaging, branded applications that run on all major desktop operating systems.
The benefits are extensive. By using Adobe AIR as part of your RIA strategy, you can boost productivity, extend your market reach, enhance customer satisfaction, improve customer retention, lower costs, and increase profits.
Business Benefits
Companies like eBay, AOL, and NASDAQ are already using Adobe AIR to deliver engaging RIAs to their users' desktops. With Adobe AIR, you can:
- Establish a more persistent connection with existing customers.
- Deliver fully branded experiences with desktop functionality.
- Leverage existing personnel, processes, and infrastructure.
- Develop and deliver RIAs efficiently using proven Adobe technology.
- Increase the ROI of your web investments.
If you know of any other AIR usage benefits, please share them with us in the comments.
Thursday, July 17, 2008
at
8:30 PM
Posted by
Suresh Kumar A
Phishing Attack
Phishing is a type of attack wherein the attacker impersonates a valid site and steals sensitive information entered by the customer on the fake site.
The attacker sends the victim a forged e-mail containing a link to a fake page. The fake page looks exactly like a valid page of the original site. These e-mails contain upsetting or exciting (but false) statements to get the customer to react immediately. When the customer clicks the link, he is asked to provide his credentials to log in and update his personal information. This reveals important information to the attackers.
Steps to prevent these attacks
The best way to prevent phishing attacks is by creating customer awareness. Some important points that need to be communicated to customers include:
1. Organizations should constantly remind their customers that they will never request sensitive information via e-mail. Moreover, all e-mail communications should address the customer by first and last name.
2. Customers need to be educated not to click on the URL of a critical website (e.g. an Internet banking website) that arrives via email, but to visit these websites by typing the address directly into the browser.
3. Customers should be educated on identifying secure websites (for example, https in the URL or the 'Lock' icon) before submitting a username, password, credit card number or other sensitive information.
4. Customers should be educated about choosing strong passwords and the importance of changing them regularly.
How to choose a Strong Password
5. Customers should be educated to be suspicious of any e-mail with an urgent request for personal information.
6. Customers should be provided with easy methods to report phishing incidents.
If you have any other steps to prevent phishing attacks, please share them with us in the comments.
Wednesday, July 16, 2008
at
7:30 PM
Posted by
Suresh Kumar A
The latest shiny new iPhone with its super fast 3G web browsing is here and selling out fast – but that may in part be due to low initial stocks in the stores as well as consumer demand. The launch was not smooth with problems in the
USA with AT&T activation server crashes and with
O2 in the UK causing
Gizmodo to call the event "iPocalypse". But, despite the problems,
Apple are already claiming to have
sold 1 million iPhones over Friday, Saturday and Sunday – higher than expected and over 70 days faster than the first release – but then they had 28 operators in 22 countries this time.
Time will tell if this new model, with its new features and lower prices, will deliver the success that Apple initially planned. Like many other people this weekend, I found myself testing out the latest iPhone features in the local store… The prospect of super-fast browsing on a large multi-touch screen and the promise of upcoming
TomTom navigation using the new GPS capabilities is very tempting – probably a good job they had none in stock.
So is that it for Apple? Do they have the perfect phone? Well, of course not – there are plenty of niggles and many things they should be looking at for their seemingly annual phone release, especially with new versions of Windows Mobile and the new Google Android looming on the horizon. The new
App Store is a great start and will give customers a steady fix of new iPhone goodness until the next big release. It opens the opportunities for third parties to start filling the gaps by adding more value (hurry up TomTom).
Last year I was predicting that Santa Jobs would deliver GPS and 3G, but like many others I was also asking for a decent camera. This seems to be the most obvious omission from the new device, especially since we are seeing 5 megapixels fast becoming the standard. We are even seeing a few 8-megapixel super-shooters, and even entry-level phones are coming with 3.2 megapixels these days. This makes Apple's measly 2-megapixel camera a clear year out of date right from the start. I wouldn't mind the low pixel count if the quality was good, but so far many sites are reporting low quality too compared with other mass-market handsets. So
prediction number 1 would be a 5-megapixel camera with auto-focus and flash – perhaps with face recognition for exposure and focus.
Read more>>
Tuesday, July 15, 2008
at
7:35 PM
Posted by
Suresh Kumar A
After finally getting my iPhone 3G, I spent most of my time playing with the Apps and App Store. It's early days, but there are already some amazing Apps available for the iPhone. This list features the top 5 reasons to love (or live with) the iPhone 3G.
1. Looks (and feels) amazing
Whatever you think of the feature set, and the chances of your fat fingers destroying that shiny big screen, the iPhone looks amazing, and it probably feels amazing in the palm of your hand, too. It's sleek, curvy, bright and shiny, slim and sexy, with on-screen icons and buttons that just ooze and drip class.
2. Touch screen
That touchscreen is a mixed blessing. Fingers are messy things (some people's more than others) and sweeping and tapping them all over a glossy screen is sure to take its toll eventually. Plus, fingers are much fatter than a stylus, so you can't help but obscure what you're interacting with. You'll also have to invest in your own screen protection, as the first-generation iPhone doesn't have its own.
3. iPod
With its beautiful 3.5-inch widescreen display and Multi-Touch controls, iPhone is also one amazing iPod. Browse your music in Cover Flow and watch widescreen video with the touch of a finger.
Scroll through songs, artists, albums, and playlists with a flick. Browse your music library by album artwork using Cover Flow. Even view song lyrics that you’ve added to your library in iTunes. Get a call while listening to music? A pinch of the microphone on your iPhone headset pauses the tune and answers the call.
4. Wi-fi
Of course wi-fi isn't everywhere, but it's increasingly easy to hook up to the Internet at home and work, as well as plenty of other places: cafés, libraries, and urban hotspots. iPhone certainly isn't the only handset to include it, but it's a great addition.
5. Third party Applications
The iPhone 3G gives users access to lots of third-party apps. Here are my favorite third-party apps:
- FileMagnet
- Shazam
- Vicinity
- Things
- OmniFocus
- Twitterific
- NetNewsWire
- Remote
- Super Monkeyball
- Cube Runner
Please share your comments: why do you love the iPhone 3G?
Monday, July 14, 2008
at
6:49 PM
Posted by
Suresh Kumar A
At the
Next Web conference in Amsterdam over the weekend, Tapan Bhat, the Yahoo! vice president of Front Doors, told attendees that search would not dominate the web in the future. "The future of the web is about personalization. Where search was dominant, now the web is about 'me.' It's about weaving the web together in a way that is smart and personalized for the user," he said.
Interestingly, Google appears to have similar ideas. A couple of weeks ago, Google's CEO Eric Schmidt told the
Financial Times that personalization was a key area of research for Google. "We are very early in the total information we have within Google. The algorithms will get better and we will get better at personalization," he said. "The goal is to enable Google users to be able to ask the question such as ‘What shall I do tomorrow?’ and ‘What job shall I take?’"
Both Google and Yahoo! are hoping to take data about user behavior aggregated from across their properties (think: search history, del.icio.us bookmarks, Flickr photos, Upcoming events, Answers questions, etc.) in order to learn more about what each user wants. The ultimate goal is to deliver a more personalized experience to the user.
Privacy fears aside, if Google and Yahoo! are right, and personalization is where the web is headed, then Google might be more vulnerable than anyone thinks. According to Compete, the stickiest site on the web -- the one that demands most of our attention -- is MySpace, followed by Yahoo! and eBay. Google is actually 5th (based on February 2007 numbers). Facebook, which was 8th in February according to Compete, is likely to make a big push as their new platform adds more useful applications for users, giving them less of a reason to ever leave the site.
Why is attention important? Because the more time you have to interact with users, the more chance you have to gather information about them. The more information you have about them, the more useful and personalized you can make your service and the better you can target advertising and capture a users' ecommerce spending. If the web paradigm is indeed shifting from search to personalization, then it would appear that Yahoo! and social networking sites like MySpace and Facebook might be in a better position to take advantage of that than Google.
What do you think? Is search dead? Is personalization the next big thing? Is this a tacit admission of defeat by Yahoo! or is it visionary foresight? Who is in the best position to dominate the personalized web?
credit :
Read/Write Web
Sunday, July 13, 2008
at
7:13 PM
Posted by
Suresh Kumar A
Google has been developing a new algorithm for indexing textual content in Flash files of all kinds, from Flash menus, buttons and banners, to self-contained Flash websites. Recently, we've improved the performance of this Flash indexing algorithm by integrating
Adobe's Flash Player technology.
In the past, web designers faced challenges if they chose to develop a site in Flash because the content they included was not indexable by search engines. They needed to make extra effort to ensure that their content was also presented in another way that search engines could find.
Now that we've launched our Flash indexing algorithm, web designers can expect improved visibility of their published Flash content, and you can expect to see better search results and snippets. The
Webmaster Central blog provides more technical details about the Searchable SWF integration.
Saturday, July 12, 2008
at
5:10 AM
Posted by
Suresh Kumar A
The new and improved iPhone 3G from Apple has lots of exciting features. Here are the missing features that we hope will be available in a future version.
- No video recording.
- No card slot.
- No MMS.
- No copy&paste.
- No voice dialing.
- No Stereo Bluetooth.
- No user-replaceable battery.
- No more online sales and activation.
If I have missed any features that fit the above list, please let me know.
Friday, July 11, 2008
at
8:36 AM
Posted by
Suresh Kumar A
Today Adobe announced a series of changes to its emerging web applications platform. The changes include:
--The next version of the mobile Flash runtime will be free of license fees. Adobe also confirmed that the mobile version of the Air runtime will be free.
--Adobe changed its licensing terms and released additional technical information that will make it easier for companies to create their own Flash-compatible products.
--The company announced a new consortium called Open Screen supporting the more open versions of Flash and Air. Members of the new group include the five leading handset companies, three mobile operators (including NTT DoCoMo and Verizon), technology vendors (including Intel, Cisco, and Qualcomm), and content companies (BBC, MTV, and NBC Universal). Google, Apple, and Microsoft are not members. It's not clear to me what the consortium members have actually agreed to do. My guess is it's mostly a political group.
Adobe said that the idea behind the announcements is to create a single consistent platform that lets developers create an application or piece of content once and run it across various types of devices and operating systems. That idea is very appealing to developers and content companies today. It was equally appealing two years ago, when then-CEO of Adobe Bruce Chizen made the exact same promise (
link):
If we execute appropriately we will be the engagement platform, or the layer, on top of anything that has an LCD display, any computing device -- everything from a refrigerator to an automobile to a video game to a computer to a mobile phone.
If Adobe had made the Open Screen announcement two years ago, I think it could have caught Microsoft completely flat-footed, and Adobe might have been in a very powerful position by now. But by waiting two years, Adobe gave Microsoft advance warning and plenty of runway room to react -- so much so that ArsTechnica today called Adobe's announcement a reaction to Microsoft Silverlight (
link).
Also, the most important changes appear to apply to the next version of mobile Flash and the upcoming mobile version of Air -- meaning this was in part a vaporware announcement. Even when the new runtime software ships, it will take a long time to get it integrated into mobile phones. So once again, Microsoft has a long runway to maneuver on.
Still, the changes Adobe made are very useful. There's no way Flash could have become ubiquitous in the mobile world while Adobe was still charging fees for it. The changes to the Flash license terms remove one of the biggest objections I've seen to Flash from open source advocates (
link). The Flash community seems excited (
link,
link). And the list of supporters is impressive. Looking through the obligatory quotes attached to the Adobe release, two things stand out:
--Adobe got direct mentions of Air from ARM, Intel, Sony Ericsson, Verizon, and Nokia (although Nokia promised only to explore Air, while it's on the record promising to bundle Silverlight mobile).
--The inclusion of NBC Universal in the announcement will have Adobe people chuckling because Microsoft signed up NBC to stream the Olympics online using Silverlight. So NBC is warning Microsoft not to take it for granted, and Adobe gets to stick its tongue out.
credit :
Mobile Opportunity
Thursday, July 10, 2008
at
10:29 PM
Posted by
Suresh Kumar A
A semi-automated, largely passive web application security audit tool, optimized for an accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex web 2.0 environments.
Detects and prioritizes broad classes of security problems, such as dynamic cross-site trust model considerations, script inclusion issues, content serving problems, insufficient XSRF and XSS defenses, and much more.
Ratproxy is currently believed to support Linux, FreeBSD, MacOS X, and Windows (Cygwin) environments.
For more details and downloads, please visit the official Google site
here.
Wednesday, July 9, 2008
at
11:58 PM
Posted by
Suresh Kumar A
cat
cat tells the system to "concatenate" the content of a file to the standard output, usually the screen. If that file happens to be binary, the cat gets a hairball and the output can be a bit ugly. Typically, this is a noisy process as well. What is actually happening is that the cat command is scrolling the characters of the file, and the terminal is doing all it can to interpret and display the data in the file. That interpretation can include the character used to create the bell signal, which is where the noise comes from. The cat command has the following format:
# cat filename
cd
cd stands for change directory. You will find this command extremely useful. There are three typical ways you can use this command:
- cd .. : Move one directory up the directory tree.
- cd : With no argument, moves to your home directory from wherever you currently are (cd - returns to the previous directory you were in).
- cd directory name : Changes to a specific directory. This can be a directory relative to your current location, or one based on the root directory by placing a forward slash (/) before the directory name.
cp
The cp command is the abbreviation for copy; therefore, this command enables you to copy files. For example, to copy the file file1 to file2, issue the following command.
# cp file1 file2
find
The
find command will look in whatever directory you tell it to, as well as subdirectories under that directory, for the file specified. In the following example, the find command searches the current directory (and its subdirectories) for files ending with .pl.
# find . -name "*.pl"
grep
The
grep (global regular expression print) command searches the object you specify for the text that you specify. The syntax of the command is as follows.
# grep text file
ls
The
ls command lists the contents of the directory. The format of the output is manipulated with options. In the following example, the
ls command, with no options, lists all unhidden files (files that begin with a dot are hidden) in alphabetical order, filling as many columns as will fit in the window.
# ls
more
more is a filter for paging through text one screen at a time. This command can only page down through the text, as opposed to less, which can page both up and down through the text.
rm
rm is used to delete specified files. With the
-r option (Warning: This can be dangerous!),
rm will recursively remove files and directories. Therefore, if as root you run
rm -r on the wrong directory, all your files will be gone. By default, the rm command will not remove directories.
tar
tar is an archiving program designed to store and extract files from an archive file. This archive (called a tar file) can be written to any media, including a tape drive or a hard disk. The syntax for the tar command is as follows.
# tar action optional_functions file(s)/director(ies)
vi
vi is an extremely powerful text editor (not to be confused with a word processor). Using
vi, you can see your file on the screen (this is not the case with a line editor, for example), move from point to point in the file, and make changes. But that's where the similarities end. Cryptic commands, a frustrating user interface, and the absence of prompts can all drive you up a wall. Still, if you focus on a few basics, you'll get the job done.
If I have missed any Linux command that fits the above list, please let me know.
at
2:12 AM
Posted by
Suresh Kumar A
Yes, you can test AIR applications in a web browser. AIR uses the same rendering engine as Apple's Safari, so that browser will provide the most accurate results (and it's available on both Mac OS X and Windows, as of version 3). Firefox, which also runs on both platforms, should work as well. Firefox has an additional benefit - its excellent Javascript debugging tools.
Although you could, theoretically, test your applications in Internet Explorer, I would advise against doing so for two reasons.
1) The Javascript may not behave the same in IE as it will in your AIR apps (this is a common Ajax problem).
2) IE is a notoriously tricky browser that makes even Web development and testing much harder than it should be (in my opinion).
Tuesday, July 8, 2008
at
10:15 AM
Posted by
Suresh Kumar A
As anyone who runs a website these days knows, or should know, the recently enacted CAN-SPAM Act of 2003 makes it incumbent on emailers to either be able to establish a certain type of relationship with an email recipient or to adhere to certain mailing standards if no such relationship exists. Failure to do so can land one in Federal (or state) court.
However beyond that there is the court of Internet public opinion, and beyond even that is the high court of spam filters and spam blocking. Truly, you don’t want to run afoul of any of these.
The safest way to ensure that you stay on the good side of the law, and spam filters, particularly when building a list of email addresses to which you wish to send business, commercial, or other correspondence related to your website, is to follow this simple list of ten DOs and DON’Ts:
DON’Ts:
1. DON’T trap a website visitor’s email address and then add it to a mailing list without their permission.
2. DON’T use other identifying website visitor information, such as IP address, computer name, etc., to ‘reverse engineer’ or otherwise divine or guess at their email address, and then add it to a mailing list.
3. DON’T pre-check a check box which “opts in” to your mailings, requiring the visitor to
uncheck it in order to not receive your mailing or be added to your mailing list.
4. DON’T be coy, cute, or evasive about what your intent and policy are with respect to any email address your visitor provides.
5. DON’T add an email address, even if freely provided, to your mailing list unless you have provided a way for the visitor to clearly indicate that they want to be added to your mailing list, and they so indicate.
DOs:
1. DO state very clearly what you will do with any email address provided by a visitor, including your privacy policy.
2. DO scrupulously adhere to what you have said you will do with their email address, and never, ever share it with someone else without their explicit permission.
3. DO collect and store, with the email address submitted, the source IP address, the date and time of the submission, and any other unique identifying information; store it along with the indication of permission the visitor has provided for you to add their address to your mailing list (a minimal sketch of recording this appears after this list). I cannot stress this enough. When accused of spamming (and you will be), having this information available to refresh the memory of your accuser, and to prove to your ISP that you were not spamming them, will save your hide. An ounce of prevention here is worth a ton of trying to get off a spam blocking list without this exculpatory information.
4. DO honour opt-out requests religiously, and immediately.
5. DO pick up ISIPP’s
CAN-SPAM Compliance Pack, chock full of practical advice and tips, and even audio speeches from lawyers from the FTC and a major ISP, to make sure that you get, are, and remain CAN-SPAM compliant. If not that, at
least pick up their
CAN-SPAM and You: Emailing Under the Law eBook.
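As a hedged illustration of DO #3 above (the mailing_list table, its columns, and the already-open MySQL connection are placeholders, not a prescribed schema):
<?php
// Record proof of opt-in along with the address itself.
$email = mysql_real_escape_string($_POST['email']);
$ip    = mysql_real_escape_string($_SERVER['REMOTE_ADDR']);
mysql_query("INSERT INTO mailing_list (email, source_ip, opted_in_at)
             VALUES ('$email', '$ip', NOW())");
?>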
Friday, July 4, 2008
at
12:02 AM
Posted by
Suresh Kumar A
Here are a handful of specific techniques for improving your Web security.
- Make it your job to study, follow, and abide by security recommendations.
- Don’t use user-supplied names for uploaded files.
- Watch how database references are used. For example, if a person’s user ID is their primary key from the database and this is stored in a cookie, a malicious user just needs to change that cookie value to access another user’s account.
- Don’t show detailed error messages in the website.
- Reliably and consistently protect every page and directory that needs it. Never assume that people won’t find sensitive areas just because there’s no link to them. If access to a page or directory should be limited, make sure it is.
- Don’t store credit card numbers, social security numbers, banking information, and the like. The only exception to this would be if you have deep enough pockets to pay for the best security and to cover the lawsuits that arise when this data is stolen from your site (which will inevitably happen).
- Use SSL, if appropriate. A secure connection is one of the best protections a server can offer a user.
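As a hedged illustration of the point about error messages above (the log path is just a placeholder):
<?php
// Hide detailed errors from visitors, but keep them for the developer.
ini_set('display_errors', 0);                     // never shown in the browser
ini_set('log_errors', 1);                         // but do record them
ini_set('error_log', '/var/log/php_errors.log');  // placeholder log location
?>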
My final recommendation is to be aware of your own limitations. As the programmer, you
probably approach a script thinking about how it should be used. This is not the same as how it will be used, either accidentally or on purpose. Try to break your site to see what happens. Do bad things, do the wrong thing. Have other people try to break it, too (it's normally easy to find such volunteers). When you code, if you assume that no one will ever use a page properly, it'll be much more secure than if you assume people always will.
Thursday, July 3, 2008
at
1:09 AM
Posted by
Suresh Kumar A
As part of PHP's support for sessions, there are over 20 different configuration options you can set for how PHP handles sessions. Here I'll highlight a few of the most important ones. Note two rules about changing the session settings:
1. All changes must be made before calling session_start().
2. The same changes must be made on every page that uses sessions.
ini_set (parameter, new_setting);
For example, to require the use of a session cookie (as mentioned, sessions can work without cookies but it’s less secure), use
ini_set ('session.use_only_cookies', 1);
Another change you can make is to the name of the session (perhaps to use a more user-friendly one). To do so, use the session_name() function.
session_name('YourSession');
The benefits of creating your own session name are twofold: it’s marginally more secure and it may be better received by the end user (since the session name is the cookie name the end user will see). The session_name() function can also be used when deleting the session cookie:
setcookie (session_name(), '', time()-3600);
Finally, there’s also the session_set_cookie_params() function. It’s used to tweak the settings of the session cookie.
session_set_cookie_params(expire, path, host, secure, httponly);
Note that the expiration time of the cookie refers only to the longevity of the cookie in the Web browser, not to how long the session data will be stored on the server.
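Putting these pieces together, here is a minimal sketch of the top of a page that uses sessions (the session name and cookie parameters are placeholder values; remember that the same settings must appear before session_start() on every page):
<?php
// All session settings must come before session_start().
ini_set('session.use_only_cookies', 1);              // require a session cookie
session_name('YourSession');                         // placeholder session name
session_set_cookie_params(0, '/', '', false, true);  // browser-session cookie, whole site, HttpOnly
session_start();

$_SESSION['user_id'] = 42;  // example data; it stays on the server
?>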
Wednesday, July 2, 2008
at
12:47 AM
Posted by
Suresh Kumar A
A brute force attack is an attempt to log into a secure system by making lots of attempts in the hopes of eventual success. It’s not a sophisticated type of attack, hence the name "brute force." For example, if you have a login process that requires a username and password, there is a limit as to the possible number of username/password combinations. That limit may be in the billions or trillions, but still, it’s a finite number. Using algorithms and automated processes, a brute force attack repeatedly tries combinations until they succeed.
The best way to prevent brute force attacks from succeeding is requiring users to register with good, hard-to-guess passwords: containing letters, numbers, and punctuation; both
upper and lowercase; words not in the dictionary; at least eight characters long, etc. Also, don’t give indications as to why a login failed: saying that a username and password combination isn’t correct gives away nothing, but saying that a username isn’t right or that the password isn’t right for that username says too much.
To stop a brute force attack in its tracks, you could also limit the number of incorrect login attempts by a given IP address. IP addresses do change frequently, but in a brute force attack, the same IP address would be trying to login multiple times in a matter of minutes. You would have to track incorrect logins by IP address, and then, after X number of invalid attempts, block that IP address for 24 hours (or something). Or, if you didn’t want to go that far, you could use an “incremental delay” defense: each incorrect login from the same IP address creates an added delay in the response (use PHP’s sleep() function to create the delay). Humans might not notice or be bothered by such delays, but automated attacks most certainly would.
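Here is a minimal sketch of the incremental-delay idea. It assumes an already-open MySQL connection, a hypothetical failed_logins table with ip as its primary key, and a hypothetical valid_login() function that checks the credentials:
<?php
$ip = mysql_real_escape_string($_SERVER['REMOTE_ADDR']);

// How many times has this IP already failed?
$result = mysql_query("SELECT attempts FROM failed_logins WHERE ip = '$ip'");
$row = mysql_fetch_assoc($result);
$attempts = $row ? (int) $row['attempts'] : 0;

// Each prior failure adds a one-second delay; people barely notice, scripts stall.
sleep($attempts);

if (!valid_login($_POST['username'], $_POST['password'])) {  // hypothetical check
    mysql_query("INSERT INTO failed_logins (ip, attempts) VALUES ('$ip', 1)
                 ON DUPLICATE KEY UPDATE attempts = attempts + 1");
    exit('Login failed.');
}

// Successful login: reset the counter for this IP.
mysql_query("DELETE FROM failed_logins WHERE ip = '$ip'");
?>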
Monday, June 30, 2008
at
9:45 PM
Posted by
Suresh Kumar A
Cookies
Cookies are a way for a server to store information on the user's machine. This is one way that a site can remember or track a user over the course of a visit. Think of a cookie as being like a name tag: you tell the server your name and it gives you a sticker to wear. Then it can know who you are by referring back to that name tag.
- Cookies are limited to about 4 KB of total data, and each Web browser can remember a limited number of cookies from any one site. This limit is 50 cookies for most of the current Web browsers.
- Each browser treats cookies in its own way. Be sure to test your Web sites in multiple browsers on different platforms to ensure consistency.
- Users can reject cookies or turn them off in their Web browsers.
Sessions
Data is stored on the server, not in the Web browser, and a session identifier is used to locate a particular user’s record (the session data). This session identifier is normally stored in the user’s Web browser via a cookie, but the sensitive data itself—like the user’s ID, name, and so on—always remains on the server.
Sessions have the following advantages over cookies.
- They are generally more secure (because the data is being retained on the server).
- They allow for more data to be stored.
- They can be used without cookies.
Whereas cookies have the following advantages over sessions.
- They are easier to program.
- They require less of the server.
In general, to store and retrieve just a couple of small pieces of information, use cookies. For most of your Web applications, though, you’ll use sessions.
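As a quick side-by-side sketch of the two approaches (the values are only illustrative):
<?php
// Cookie: the data itself is sent to, and stored in, the browser.
setcookie('first_name', 'Suresh', time() + 60*60*24*30);  // lasts 30 days

// Session: only the session ID travels in a cookie; the data stays on the server.
session_start();
$_SESSION['first_name'] = 'Suresh';
?>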
Sunday, June 29, 2008
at
2:41 AM
Posted by
Suresh Kumar A
at
12:10 AM
Posted by
Suresh Kumar A
One of the best and most widely used graphics tools for Linux is GIMP, the GNU Image Manipulation Program. GIMP is a full-featured image editing program with many menus, tools and filters.
Below are my ten reasons 'WHY I LOVE GIMP'.
- Floating menus (which you access by right-clicking an image window).
- Graphics layers (so that effects can be superimposed).
- More than 100 plug-in filters and tools.
- More than 20 editing tools.
- Multiple image windows (for cutting and pasting graphics, or for multiple views of a file).
- Multiple undo levels.
- Scripting language to automate image processing or to create new filters.
- More than six floating tool, brush, color and pattern windows.
- Support for importing and exporting 24 graphics formats.
- Multi-platform support (GIMP runs on Windows, Linux and Mac).
Friday, June 27, 2008
at
10:09 PM
Posted by
Suresh Kumar A
In the 10 Things blog, Deb Shinder recently pointed out
10 ways you might be breaking the law with your computer and not even know it. There’s yet another way that wasn’t mentioned in that article. Specifically it has to do with recent arrests made by the FBI in suspected child pornography cases.
As has been reported in News.com and elsewhere, the FBI has been recently
employing fake Web sites to lure people into child pornography. A suspect doesn’t have to have any child pornography on his computer either. Merely clicking the link is enough to trigger an investigation, search warrants, and the resultant perp walk, whether or not there was any intent to indeed consume child pornography as part of the clicking.
Improbable cause
But what if the user didn’t know the images were on the computer? Or what if the user didn’t know what the Web site was before it was clicked?
Sorry. That doesn’t count. The link got clicked. The images are on the computer. Go to jail. Go directly to jail. Don’t pass Go. Don’t collect $200.
Certainly such a thing wouldn’t happen, right? The only way someone could go to a kiddie porn site was to find the link and intentionally click it. As an IT leader you do, or should, know better.
There are many, many different ways users can be tricked into clicking things or winding up on sites they shouldn't have. First, there are the obvious things that can happen when viruses or other malware redirect browsers to go places they're not supposed to. Someone could program a simple redirect in a Web site, maybe through something as simple as a clear gif, forwarding a browser to the target Web site. Even something as simple as creating a link in
TinyURL that points to the site.
TinyURL is especially dangerous, because there’s no way to know exactly what the destination address is before the user goes there. It could be an easy tool for one user to use against another as a cruel joke or some form of retaliation.
Read More
Thursday, June 26, 2008
at
12:47 AM
Posted by
Suresh Kumar A
Here are the 8 reasons why I love Ubuntu.
- It's free and very fast.
- It's based on Debian and it uses the fastest package manager out there, APT.
- APT package management is sooo easy.
- It takes about 1 minute to install a piece of software and 10 to 20 seconds to uninstall or completely remove it.
- Firefox, OpenOffice, GIMP, and a huge repository of free software.
- Live CD/Install combo- I liked being able to use the operating system before it even started installing!
- KDE is available via Kubuntu (I hate Gnome).
- It's perfect for low-end PCs.
It is the BEST Linux operating system out there, and this way we can compete with big operating systems like Windows and Mac. It's time for Linux users to have a strong, easy-to-use and powerful desktop operating system, and Ubuntu can help us.
Please write a comment: WHY DO YOU LOVE UBUNTU?
at
12:14 AM
Posted by
Suresh Kumar A
As web applications have become more complex, they have begun to push the boundaries of both the capabilities of the browser and the usability of the application. As their popularity grows, these issues become more apparent and important and highlight the fact that there are still a number of significant issues for both developers and end-users when deploying and using applications within the browser.
The web browser was originally designed to deliver and display HTML-based documents. Indeed, the basic design of the browser has not shifted significantly from this purpose. This fundamental conflict between document- and application-focused functionality creates a number of problems when deploying applications via the browser.
Conflicting UI
Applications deployed via the browser have their own user interface, which often conflicts with the user interface of the browser. This application-within-an-application model often results in user interfaces that conflict with and contradict each other. This can lead to user confusion in the best cases, and application failure in the worst cases.
The classic example of this is the browser's Back button. The Back button makes sense when browsing documents, but it does not always make sense in the context of an application. Although a number of solutions attempt to solve this problem, they are applied to applications inconsistently, and users may not know whether a specific application supports the Back button or whether it will force their application to unload, causing it to lose its state and data.
Distance from the Desktop
Due in part to the web security model (which restricts access to the user's machine), applications that run in the browser often do not support the types of user interactions with the operating system that people expect from applications. For example, you cannot drag a file into a browser-based application and have the application act on that file. Nor can the web application interact with other applications on the user's computer.
Primarily Online Experience
Because web applications are delivered from a server and do not reside on the user's machine, web applications are primarily an online experience. Although attempts are underway to make offline web-based applications possible (through plugins and HTML 5), they do not provide a consistent development model and they fail to work across different browsers, or they require users to install additional extensions to the browser. In addition, they often require users to interact with and manage their application and browser in complex and unexpected ways. However, this is an area where the browser looks to make progress over the next couple of years.
Lowest Common Denominator
Finally, as applications become richer and more complex and begin to push the boundaries of JavaScript and DHTML, developers are increasingly faced with differences in browser functionality and API implementations. Although these issues can often be overcome with browser-specific code, they lead to code that
a) is more difficult to maintain and scale;
b) takes time away from function-driven development of feature functionality.
Although JavaScript frameworks are a popular way to help address these issues, they can offer only the functionality provided by the browser, and often they resort to the lowest common denominator of features among browsers to ease the development model. The result for JavaScript- or DHTML-based applications is a lowest common denominator user experience and interaction model, as well as increased development, testing, and deployment costs for the developer.
As browsers continue to mature, this lowest common denominator of usable functionality will improve, but this is a process that can take a significant amount of time (often years) as new browsers are released and old browsers fall out of use.
Is RIA (Rich Internet Applications) the solution to these problems?
Tuesday, June 24, 2008
at
9:52 PM
Posted by
Suresh Kumar A
Producing code that clearly conveys a developer's intent is key to any well-written application. That applies not only to PHP, but to every programming language. Developers who emphasize the creation of legible code tend to create applications which are easier to both maintain and expand upon. After seven years of programming in PHP I've worked on a variety of projects where well-organized and legible code was set aside for numerous reasons. Some of those reasons include time constraints, lack of experience, lost enthusiasm, misdirected pre-optimizing, and the list goes on.
Today we'll look at three simple methods which are commonly ignored by developers for some, if not all, of the reasons described above. First, we'll discuss the importance of clean conditional logic. Second, we'll look at how you can cleanly output blocks of HTML in PHP. And finally, we'll examine the use of sprintf to convey variables placed in strings more legibly.
Tip #1: Write Clean Logic Statements
Example 1.1: Unclean Conditional Logic
<?php
if($userLoggedIn) {
// Hundreds of lines of code
}else{
exit();
}
?>
The above statement seems straightforward, but it's flawed because the developer is giving this conditional block too much responsibility. I know that might sound a little weird, but stay with me.
The type of conditional organization above makes for unnecessarily complex code to
both interpret and maintain. A brace that's paired with a control structure hundreds of lines above it won't always be intuitive for developers to locate. I prefer the style of conditional logic in example 1.2, which inversely solves the previous example. Let's take a look.
Example 1.2: Clean Conditional Logic
<?php
if(!$userLoggedIn) {
exit();
}
// Hundreds of lines of code
?>
This conditional statement is more concise and easier to understand. Instead of stating: "if my condition is met, perform hundreds of operations, else exit the script", it's saying "if my condition is not met, exit the script. Otherwise, I don't care about what happens after that. I am only concerned with stopping execution". So, by doing this, you've limited the operations that a given control structure has been tasked with, and that will help other developers quickly understand your code.
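As a hedged illustration of the sprintf point mentioned in the introduction (the query, table and variable names here are made up, not taken from the original article):
<?php
// With sprintf, the placeholders show at a glance where each value lands,
// compared with a long chain of string concatenation.
$userId = 5;          // placeholder values
$status = 'active';
$query = sprintf("SELECT * FROM users WHERE user_id = %d AND status = '%s'",
                 $userId, $status);
?>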
Read More
Monday, June 23, 2008
at
11:13 PM
Posted by
Suresh Kumar A
It didn't take long for me to throw out the
Evolution email client from my
Ubuntu platform. Instead, I installed
Mozilla’s Thunderbird, an email client which I’m very familiar with. What caused the switch? Well, I was trying to configure an email account running on an IMAP server. I had a terrible time getting it to work. After so many unsuccessful tries, it was time for me to kiss the Evolution package goodbye. I’m glad it’s gone because Thunderbird is working just fine in Ubuntu.
Feature | Evolution 2.6 | Thunderbird 1.5.0.5 |
Protocols | POP, IMAP, Exchange, Hula, Files | POP, IMAP, Files |
Security | SSL and TLS | SSL and TLS; only available via Edit->Account Settings->Security Settings after creating the account |
LDAP | Yes | Yes |
E-mail Security Options | | |
Load Images | Can be disabled, enabled for all, or enabled for known contacts; default: enabled | Can be disabled, enabled for all, or enabled for known contacts; default: enabled |
JavaScript | No | No |
E-mail Scam Detection | No | Yes |
Antivirus Support | No | Yes; default: disabled |
Additional Functionality | | |
Address Book | Yes | Yes |
Calendar | Yes | No; available as plug-in |
Task List | Yes | No |
Memos | Yes | No |
at
10:01 PM
Posted by
Suresh Kumar A
Murphy’s Law states that whatever can go wrong will go wrong. So even with the appropriate logging and monitoring measures in place, accidents are bound to happen. Even with warning and confirmation screens, a legitimate user can still delete information they didn’t really intend to. The problem with DELETE statements is that they are irrecoverable.
One suggestion to prevent the unintentional loss of data stored in a database is to add a new field to the records named IS_DELETED. The field is a TINYINT(1) which contains either 0 or 1 to denote whether the record is considered deleted. Your application would not issue any actual DELETE queries; rather, it would set the field value to 1. It’s trivial to change the value of the field back to restore the record in case of an accident.
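For example, here is a hedged sketch of how the application layer would "delete" and restore a record under this scheme (the widgets table and its column names are placeholders; the $GLOBALS['DB'] connection matches the script below):
<?php
// Soft delete: flag the record instead of removing it.
$id = (int) $_POST['id'];
mysql_query("UPDATE widgets SET IS_DELETED = 1 WHERE widget_id = $id", $GLOBALS['DB']);

// Undoing an accidental delete is just as easy.
mysql_query("UPDATE widgets SET IS_DELETED = 0 WHERE widget_id = $id", $GLOBALS['DB']);
?>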
Depending on the type of application you are developing, you may not want stale data accumulating in the database. To prevent deleted records from piling up in the tables, you can write an administrative script that runs weekly (or even nightly) from cron or Scheduled Tasks to actually delete the records. The code below shows a script that I use.
A SHOW TABLES query is issued to retrieve a list of all tables in the database. For each table name that is returned, the column names are retrieved with a SHOW COLUMNS query and scanned to find any column named IS_DELETED. If one is found, a true DELETE query is issued; otherwise the script moves on to analyze the next table.
#!/usr/bin/php
<?php
include '../lib/common.php';
include '../lib/db.php';

// retrieve the list of tables
$table_result = mysql_query('SHOW TABLES', $GLOBALS['DB']);
while ($table_row = mysql_fetch_array($table_result))
{
    // retrieve the list of column names in this table
    $column_result = mysql_query('SHOW COLUMNS FROM ' . $table_row[0], $GLOBALS['DB']);
    while ($column_row = mysql_fetch_assoc($column_result))
    {
        // if the table has an IS_DELETED field then delete the flagged records
        if ($column_row['Field'] == 'IS_DELETED')
        {
            mysql_query('DELETE FROM ' . $table_row[0] . ' WHERE IS_DELETED = 1', $GLOBALS['DB']);
            // no need to scan the remaining columns; move on to the next table
            break;
        }
    }
    mysql_free_result($column_result);
}
mysql_free_result($table_result);
mysql_close($GLOBALS['DB']);
?>
Sunday, June 22, 2008
at
8:22 AM
Posted by
Suresh Kumar A
The greatest weakness in many PHP programs is not inherent in the language itself, but merely an issue of code not being written with security in mind. For this reason, you should always take the time to consider the implications of a given piece of code, to ascertain the possible damage if an unexpected variable is submitted to it.
<?php
// remove a file from the user's home directory... or maybe
// somebody else's?
unlink ($evil_var);
// Write logging of their access... or maybe an /etc/passwd entry?
fwrite ($fp, $evil_var);
// Execute something trivial.. or rm -rf *?
system ($evil_var);
exec ($evil_var);
?>
You should always carefully examine your code to make sure that any variables being submitted from a web browser are being properly checked, and ask yourself the following questions:
- Will this script only affect the intended files?
- Can unusual or undesirable data be acted upon?
- Can this script be used in unintended ways?
- Can this be used in conjunction with other scripts in a negative manner?
- Will any transactions be adequately logged?
By adequately asking these questions while writing the script, rather than later, you prevent an unfortunate re-write when you need to increase your security. By starting out with this mindset, you won't guarantee the security of your system, but you can help improve it.
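For instance, here is one hedged way to answer the first question for the unlink() example above (the allowed file names and the upload directory are placeholders, not a recommendation for your particular script):
<?php
// Only act on files we explicitly expect, and never pass raw request
// data straight to unlink(), system(), and the like.
$allowed = array('report.txt', 'notes.txt');   // the only files this script may remove
$file = basename($_GET['file']);               // strip any directory components

if (in_array($file, $allowed, true)) {
    unlink('/home/user/uploads/' . $file);     // placeholder directory
} else {
    exit('Invalid file requested.');
}
?>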
You may also want to consider turning off register_globals, magic_quotes, or other convenience settings which may confuse you as to the validity, source, or value of a given variable. Working with PHP in error_reporting(E_ALL) mode can also help warn you about variables being used before they are checked or initialized (so you can prevent unusual data from being operated upon).
at
2:01 AM
Posted by
Suresh Kumar A
part1
1. Use Javascript alerts to indicate the values of variables.
There are three families of values you'll need to confirm:
- Values received by a function:
alert(value);
Use this for any Javascript function, like the one triggered by the HTML event or the one called when the PHP script returns its value.
- Values returned by the PHP script:
alert(ajax.responseText);
Since responseText stores the data you'll deal with in the Javascript, confirming its value is a great debugging technique.
- Values to be assigned to HTML elements:
alert(message);
You could also have problems in writing new HTML to the web page. In such cases, you'll need to confirm whether the problem is in the message being written (tested using such an alert) or in the writing process itself (i.e., assigning a value to innerHTML).
2. Make sure you reload your Web browser after making changes. Failure to do so is a common, and very frustrating, mistake.
3. Test with multiple browsers. With Javascript and HTML, different browsers can behave differently, so see how your applications behave in multiple browsers.
4. Watch the method --
GET or
POST -- being used. Some browsers (I'm looking at you, Internet Explorer) cache
GET page requests, so it might look as if the changes you made didn't take effect.
5. Use a Javascript console. Good browsers, like Firefox and Safari, can show Javascript errors in a separate window.
6. Use a Javascript debugger. Firefox users benefit greatly from the Venkman debugger
(www.mozilla.org/projects/venkman). Internet Explorer users have the Microsoft Script Debugger.
at
1:09 AM
Posted by
Suresh Kumar A
For the most part, Linux is engineered in a fashion that makes it hard for viruses to run. Also, because more PCs currently run Windows, it is more worthwhile writing viruses for the Windows platform. However, there are several reasons you might want a virus scanner on your Linux PC:
- to scan a Windows drive in your PC
- to scan Windows machines over a network
- to scan files you are going to send to other people
- to scan e-mail you are going to forward to other people
- some Windows viruses can run with Wine.
Open Source Antivirus
Free version of commercial Antivirus
Friday, June 20, 2008
at
12:30 AM
Posted by
Suresh Kumar A
There are a few basic steps to maintaining a secure Ubuntu system:
- Don't use root - The default Ubuntu installation does not assign a root password and you cannot log in as root. Instead, the default user account can use Sudo to run commands as root. Additional user accounts cannot even run Sudo unless they are given explicit permission. Restricting root access limits your ability to accidentally (or intentionally) screw up the entire operating system.
- Limit network services - Only enable services that you need. If you don't need a mail server, then don't install one. If you do not host web pages, then don't install a web server. Attackers can only exploit network services that are running on your system.
- Use trusted software sources - There are literally hundreds of unofficial repositories. Installing software from an unknown and untrusted repository could result in the installation of hostile software. Don't change the default repository settings or install software from untrusted providers unless you know what you are doing. Remember: just because they say it is safe does not mean it really is safe.
- Limit scripts - web browsers, chat room software, and other programs can transfer potentially hostile software from the network, download files, and run programs. If you don't need this functionality, then disable it.
- Use strong passwords - If you are the only person with physical access to your computer and you do not allow remote network access, then you can probably get away with having abcd or your pet's name as your password. (One of my home computers is usually logged in and the screen saver does not demand a password - this is as effective as having no password.) However, if you are in a corporate environment with many users, or enable remote access, or are at home with young kids (or cats) who like to press the delete button, then consider a strong password. Please visit this link to learn how to choose a strong password.
- Programs like John the Ripper (sudo apt-get install john) are designed to crack passwords through dictionary attacks and common password patterns like the ones listed above. In my experience, John can crack about 20 percent of user-chosen passwords in the first few minutes, and up to 80 percent in a few hours. The best passwords will not be based on dictionary words or simple patterns, and will be memorable. Good passwords should make sense to only you and not anyone else.
- Don't compromise your security - Telling people "I have a really cool password - it's my student ID number from high school and nobody will guess that!" is a huge hint to an attacker. Don't hint at your password, don't e-mail it, and don't tell it to anyone in public. If you think that somebody might have a clue about your password, then change it immediately. Remember: the only person inconvenienced by a password change will be you. Beyond passwords, don't give accounts with Sudo access to anyone, don't install software from strangers, and don't run with scissors. Your security is as strong as its weakest link, and that is often the user.
Thursday, June 19, 2008
at
1:47 AM
Posted by
Suresh Kumar A
Under Linux, it is possible to run a number of Windows applications without having Windows installed at all. This is done with Wine. I'm not talking about the fermented beverage some of us are quite fond of, but a package that runs on Linux. Allow me to paraphrase from the Wine Web site.
Wine Is Not an Emulator. Wine is a compatibility layer, a set of APIs that enable some Windows applications to operate on a Linux system running the X window system (the Linux graphical environment).
Wine will not run every Windows application, but the number of applications it is capable of running is increasing all the time. Some commercial vendors have ported certain Windows applications to Linux by making some of the code run in Wine. This has sped up the normal production cycle and made it possible for them to get their programs to Linux users faster. If you really need to run a Windows application under Linux and you would like to go this route, the commercial Wines tend to be a better approach.
Many Linux distributions include a version of Wine on the CDs, and some let you select Windows compatibility applications as part of the installation procedure. Keep in mind that the newer your Wine, the better. For the latest and greatest on Wine development, visit the Wine web site (http://www.winehq.org). A great deal of Wine development is being done at CodeWeavers (http://www.codeweavers.com). Its version provides an installation wizard to guide you through the installation and configuration process for Wine. It makes the whole process extremely simple.
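Once a recent Wine is installed, trying a Windows program is typically as simple as running wine program.exe from the directory that contains it; results vary by application, so the application database on the Wine site is worth checking when something misbehaves.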
VMware
The Wine project has done some impressive work, but it will not run all Windows applications. Sometimes you just need to run the whole shebang, and that means a full copy of Windows. Because you don't want to boot back and forth between Linux and Windows, it would be great if you could run Windows entirely on your Linux machine. This is the philosophy behind VMware - and it doesn't stop there.
VMware enables you to create virtual machines on your computer. Complete with boot-up BIOS and memory checks, VMware virtualizes your entire hardware configuration, making the PC inside the PC as real as the one you are running. Furthermore, VMware enables you to run (not emulate) Windows 95, 98, 2000, NT, FreeBSD, or other Linuxes. For the developer or support person who needs to work (or write code) on different platforms, this is an incredible package. Yes, you can even run another Linux on your Linux, making it possible to test (or play with) different releases without reinstalling on a separate machine. VMware knows enough to share your printers, network cards, and so on. You can even network between the "real" machine and the virtual machine as though they were two separate systems.
VMware comes in a variety of packages and price points. Visit the VMware Web site (http://www.vmware.com) for details.
Win4Lin
Another alternative still requires a licensed copy of Windows. Win4Lin, formerly Netraverse (http://www.win4lin.com), sells a package called (you guessed it) Win4Lin. This is a package designed to let you run Windows on your system but, unlike VMware, only Windows. The classic Win4Lin product only supported Windows 95, 98, and ME, but with the introduction of Win4Lin Pro, Windows 2000 and XP are also supported. It is, however, somewhat less expensive than VMware. Once again, remember that because you aren't emulating Windows but actually running a copy, you still need that licensed copy of Windows.
Win4Lin's magic is performed at the kernel level. Consequently, this requires that you download a patched kernel equivalent to what you are currently running or that you patch and rebuild your own. If you have compiled custom drivers into your kernel, you are going to have to go through the process again to get Win4Lin going. This whole process is no longer necessary if you choose to purchase Win4Lin Pro.
Wednesday, June 18, 2008
at
12:40 AM
Posted by
Suresh Kumar A
In order to change your Ubuntu desktop environment from Gnome to KDE, follow the steps below.
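These steps describe the typical approach on a stock Ubuntu system; package names and menu labels may differ slightly between releases.
1. Install the KDE desktop metapackage, for example with sudo apt-get install kubuntu-desktop. This pulls in the KDE libraries and the standard KDE applications.
2. If the installer asks whether GDM or KDM should handle logins, either choice works; both let you pick a desktop at login time.
3. Log out, and on the login screen open the session menu (usually Options > Select Session), choose KDE, and log back in.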
Tip - Many Gnome applications only need the Gnome libraries to run. If you keep both desktops on the same system, then you can use many of the same applications under either desktop.
Tuesday, June 17, 2008
at
12:57 AM
Posted by
Suresh Kumar A
Let's see how Ubuntu differs from other Linux distributions such as Red Hat or Fedora, Debian, SUSE, and Knoppix.
If you log into the command line of both an Ubuntu system and a Red Hat Enterprise Linux or Fedora system, very little will look different. There are common directories and utilities between the two, and functionality is fundamentally the same. So what makes Ubuntu different from other Linux distributions?
One difference is the installer. The complexity of booting and installing Ubuntu has been narrowed down to a handful of mouse clicks, making many of the install decisions automatic based on assumptions as to what the average user may need and want. In contrast, a Red Hat system presents the user with many install options, such as setting up a workstation or server, individually selecting packages to install, and setting administrative options.
Another major difference among Linux distributions is in software management tools. The aim of the utilities and packaging systems is the same for Debian as for other Linux distributions; however, the operation and implementations are significantly different. Ubuntu and most other Debian-based systems use the APT (Advanced Package Tool) family of utilities for managing software. You use APT to install, remove, query, and update Debian (deb) packages. Red Hat uses an RPM packaging system to handle the same tasks with its rpm packages.
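For example, where an Ubuntu user would install a package with sudo apt-get install package-name, a Red Hat or Fedora user would typically reach for yum install package-name, or install a downloaded file directly with rpm -i package-file.rpm. The ideas are the same, but the tools and package formats are not interchangeable.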
Another big difference is the way the systems look in regard to initialization, login screen, default desktop, wallpaper, icon set, and more. From this look-and-feel perspective, there are a lot of differences. Although Red Hat and Ubuntu both use the GNOME desktop as the default window manager, the GUI tools used for administering the system and their locations on the drop-down menus are entirely different.
The login screen and autumn-colored theme of a default Ubuntu system set it apart from other distributions as well. When you drop down the menus of an Ubuntu desktop, you are not presented with a huge list of applications and utilities. What you get is a rather simple and elegant mixture of some of the best and most functional applications available for the Linux desktop. This approach is characteristic of Ubuntu and is done with the intent of keeping the user from feeling overwhelmed.
Another unique characteristic of an Ubuntu system is the intentional practice of locking the root user account. Most Linux distributions require the user to log in as root or su to root to perform administration tasks; a user on Ubuntu instead does this through sudo, using their own login password and not a separate one for the root user.
Do you have any comparisons to share? Please post them in the comments below.
Monday, June 16, 2008
at
1:27 AM
Posted by
Suresh Kumar A
Because important information is normally stored in a session (you should never store sensitive data in a cookie), security becomes more of an issue. With sessions there are two things to pay attention to:
1) Session ID
2) Session data
A malicious person is far more likely to hack into a session through the session ID than through the data on the server, so I'll focus on the session ID here.
The session ID is the key to the session data. By default, PHP will store this in a cookie, which is preferable from a security standpoint. It is possible in PHP to use sessions without cookies, but that leaves the application vulnerable to session hijacking: If I can learn another user’s session ID, I can easily trick a server into thinking that their session ID is my session ID. At that point I have effectively taken over the original user’s entire session and would have access to their data. So storing the session ID in a cookie makes it somewhat harder to steal.
One method of preventing hijacking is to store some sort of user identifier in the session, and then to repeatedly double-check this value. The HTTP_USER_AGENT — a combination of the browser and operating system being used—is a likely candidate for this purpose. This adds a layer of security in that one person could only hijack another user’s session if they are both running the exact same browser and operating system.
Next Post - As a demonstration of this, let’s see an example.
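In the meantime, here is a minimal sketch of the idea (the fingerprint key name is arbitrary, and a real application would redirect to a login page rather than simply exit):

<?php
session_start();

// Fingerprint the client: the browser/OS string, hashed so it is not stored verbatim.
$fingerprint = md5($_SERVER['HTTP_USER_AGENT']);

if (!isset($_SESSION['fingerprint'])) {
    // First request of this session: remember the fingerprint.
    $_SESSION['fingerprint'] = $fingerprint;
} elseif ($_SESSION['fingerprint'] !== $fingerprint) {
    // The user agent changed mid-session: treat it as a possible hijacking.
    session_destroy();
    exit('Session check failed. Please log in again.');
}
?>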
at
12:43 AM
Posted by
Suresh Kumar A
I came across Leopard's new features at Apple's Web site, and I thought I'd talk about my experience a little bit.
- One of the big areas of improvement is the visuals, like the 3D Dock and transparent menu bar.
- Time Machine has gotten a lot of press and is one of the things that I appreciate most about Leopard. This is a backup utility that's really smart and mindless to use. It automatically backs up your entire hard drive to an external disk, even over a network. I like the fact that it automatically keeps hourly backups for the past day, daily backups for the past month, and weekly backups as long as space allows. I was pretty good about backing up regularly using Backup (Apple's program) but this is easier and certainly better. It could be a little more configurable but maybe keeping options to a minimum is what makes it so easy for anyone.
- Spaces is another big, cool feature. It creates multiple desktops, so you can have applications and windows that only appear in a certain "space". Maybe one for work, one for personal stuff, etc. I used it a while, then stopped. I will probably use it again when I get the time to really master it.
- Some people hate the new Stacks feature, some people like it. It's just a different way for folders in the Dock to appear when you click on them. I think they're really nice, personally.
- The Finder windows have built-in searches on the left side, which is useful, although I rarely remember to use them.
- The Quick Look feature is excellent, particularly when it comes to email attachments. I'm not really using the changes in Mail, Safari (I primarily use Firefox), and iChat, or doing anything with the Parental Controls and Boot Camp (I use Parallels). But overall, I've been much more pleased with Leopard than I expected to be.
Do you have any features to share? Please post them in the comments below.
Sunday, June 15, 2008
at
7:26 AM
Posted by
Suresh Kumar A
Ever since the first Linux distributions appeared, people have been having a hard time trying to choose the "right one" to use.
Many people end up asking "Which distribution should I use?" on the web, only to receive heaps of different suggestions (usually just the distributions that the posters like), a few arguments, and inevitably, the RPM vs DEB debate.
The problem is that even after you filter out the posts to just the suggestions of distributions, you will find that you end up with just a big list of distributions, usually with only a comment like "This is good" to guide you in your choice.
This is a really bad way to choose a distribution, since you have no real advice on WHY you should choose distribution X over distribution Y. This article aims to give you the advice you need to choose the distribution that best suits you.
DISTRIBUTION PURPOSE
One of the key things in choosing a distribution is what you are using it for. Most uses fall into one of the 3 categories below:
- Desktop usage.
- Desktop and Server usage.
- Server usage.
"Desktop usage" or "desktop distribution" is a very commonly used term to describe a Linux distribution which provides a GUI and is suitable for usage on desktop or laptop computers.
DESKTOP DISTRIBUTION
If you want a desktop distribution, some of the main requirements are:
- Ease of adjusting settings - in the case of laptops, easy network changing is important.
- Age of the software (you want the programs to be fairly recent)
- Range of GUI applications.
SERVER DISTRIBUTION
If you are looking for a server distribution, you want to look for:
- Software and API stability - do updates ever change the way the distribution works mid-release?
- Software life - how long will it get updates?
- Security - servers are often open to the public and need to be very well secured.
Do you have anything to share? Please post them in the comments below.
Article Source -
reallylinux.com
Saturday, June 14, 2008
at
9:55 AM
Posted by
Suresh Kumar A
Here are some tips and things I've found to be true after 5 years of software experience.
Follow Einstein's maxim: keep things as simple as possible, but no simpler.
If you are having difficulty coding your design, stop - you are probably going about it the wrong way; rethink the design.
Get help if you need it, but try wherever possible to work things out by yourself; you will learn faster. It's no good having someone write code that you don't understand!
Bear in mind that, most importantly, the programmer must have a clear idea of what is required before coding commences. For a simple task this may mean just keeping what's needed in mind; in complex projects you may have to write down your intentions or even make a flow chart of how things will work. Believe me, you will save much time and frustration by working this way, not to mention the umpteen versions of scrapped code!
Software, like painting, is a creative endeavour; there are many ways to achieve the same end result. If you asked two artists to paint a horse, the results would both look like a horse but would not look the same as each other. Software is like that, and the best software achieves the result with the minimum possible code.
As you become proficient at coding you will find yourself getting good results quickly with elegant code. Looking back at your early efforts will make you laugh - laugh heartily, it's a good sign.
Do you have anything to share? Please post them in the comments below.
at
12:19 AM
Posted by
Suresh Kumar A
One story not told often enough involves Linux's growing domination of the embedded market.
In this space Linux usually stacks up against older Real Time Operating Systems (RTOS). The decision by Wind River, the largest RTOS vendor, to migrate toward Linux was a turning point. There has been no turning back. But this is not an open source story. In fact, the embedded Linux business looks a lot like the rest of the embedded market.
Here is an example: Timesys provides subscriptions to its LinuxLink in order to help Tensilica customers get to market faster. Tensilica calls this a strategic partnership, alongside a deal with Embedded Alley Solutions to provide consulting and training.
The deals are not noteworthy in themselves, except that they point to how Linux has become the mainstream embedded technology of choice.
It’s Linux’ modular design, a kernel whose features designers can pick-and-choose among, which is causing this revolution in embedded systems.
As chip densities increase manufacturers outgrow the old RTOS systems, and a full-fledged operating system delivers better time to market. Microsoft is not considered viable because it lacks this key modularity, and is years from implementing it.
Microsoft is only now talking about a kernel-based design in Windows 7, which is still in the planning stages. Outside branded areas like game machines or phones, the embedded market will have sailed away from Redmond long before it's serious about it.
Do you have anything to share? Please post them in the comments below.
Article Source -
blogs.zdnet.com
Friday, June 13, 2008
at
2:10 AM
Posted by
Suresh Kumar A
The browser has become the preferred way for delivering many applications because it allows easy deployment across operating systems and simplified application maintenance. Plus, the modern programming languages used in the browser enable rapid application design and development.
The Adobe® AIR™ runtime complements the browser by providing the same application development and deployment benefits while adding desktop integration, local data access, and enhanced branding opportunities. An emerging design pattern for rich Internet applications (RIAs) is to deliver a browser-based version of an RIA in the browser for all users and an RIA on the desktop for more active users.
Feature | RIAs in the browser | RIAs on the desktop |
Installation | No application installation is necessary. | Applications install seamlessly from the browser or download and install like a traditional desktop application. |
Application delivery | Applications can be easily discovered, explored, and used. | Installed applications have more persistence, power, and functionality. |
Application updates | Applications are updated by pushing new content to a website. | AIR provides APIs that allow applications to be updated as easily as pushing new content to a website. |
Multiple operating system support | Applications run on multiple operating systems and browsers. | AIR applications are cross-platform, so they can be installed on and run on multiple operating systems. |
Programming languages | JavaScript is provided by browsers and ActionScript™ is provided by Adobe Flash® Player. | Integrated JavaScript and ActionScript virtual machines are compatible with the browser. |
Background capability | RIAs can run only in a visible browser window. | Applications can run in the background or provide notifications like traditional desktop applications. |
Persistence | Activity is limited to the browser session. When the browser is closed, information is lost. | RIAs are installed and available on the desktop. They store information locally and operate offline. |
Desktop integration | Applications are sandboxed, so desktop integration is limited. | Applications can access a desktop file system, clipboard, drag and drop events, system tray/notifications, and more. |
User interface control | RIAs run within a browser window that has its own controls, branding, and integration with the desktop. | RIAs have a customizable user interface and desktop integration, enabling branded experiences. |
Data storage | Applications have limited local storage, which the browser can destroy. | Applications have unlimited local storage and access to a local database, plus encrypted local storage. |
at
1:20 AM
Posted by
Suresh Kumar A
Ajax is, without a doubt, pretty cool, but what's cool isn't always what's best (despite what you thought in high school). As with any technology, employ Ajax because you should (when it adds useful features without adding more problems and excluding users), not because you can or know how.
Since Ajax relies upon Javascript, one potential problem is that not all users enable Javascript and it can run differently on different browsers. A well-implemented Ajax example can work seamlessly on any browser, but you really need to be thorough. You can also create a non-Ajax version of a system for those with Javascript disabled: not difficult, but again, something you do need to think about.
Another problem is that Ajax renders the browser's history feature unusable. For that matter, you can't bookmark Ajax pages the way you can search results (the page itself can be bookmarked, but not after some interaction). So by adding functionality, your Ajax application will remove common features.
And Ajax requests still require a server connection and data transfer, so they don't save any resources; they just reallocate them.
Finally, I'll point out that there's an argument to be made that IFrames offer similar functionality to Ajax but without some of its downsides.
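Coming back to the fallback point above: one way to serve both Ajax and non-Ajax users from a single PHP script is to have your Javascript mark its requests and branch on that flag. In this sketch the ajax parameter name is a hypothetical choice, and the message is a stand-in for your real output:

<?php
// The Javascript side would append something like "?ajax=1" to its request URLs.
$is_ajax = isset($_GET['ajax']) && $_GET['ajax'] === '1';

// Stand-in for whatever data your application really produces.
$message = 'Hello from the server at ' . date('H:i:s');

if ($is_ajax) {
    // Ajax callers get just the fragment they will insert into the page.
    echo '<p>' . htmlspecialchars($message) . '</p>';
} else {
    // Everyone else gets a complete, bookmarkable HTML page.
    echo '<html><body><p>' . htmlspecialchars($message) . '</p></body></html>';
}
?>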
Thursday, June 12, 2008
at
11:25 AM
Posted by
Suresh Kumar A
I have been using Linux (Fedora Core 6) on pretty high-end hardware (at least it was when I bought it) - an Intel Core 2 Duo E6600 Conroe 2.4GHz (4M shared L2 cache) with 2 GB of DDR2 RAM and an nVidia dual-head graphics card - for over a year now. And yet a simple change made it at least 20-40% faster. Even my Firefox (with 100+ tabs always open) feels much faster. So what is this magic change?
I switched to Xfce desktop from Gnome desktop (default). That’s it folks!
Contrary to popular belief, Xfce doesn't only make low-end hardware faster; it makes pretty high-end hardware faster too, and by a significant margin. I also didn't notice any UI issues after migration. Yes, the desktop looks a little different, but you can easily get used to it.
Try it, you won’t regret it, especially if you are a power user.
Notes:
1. You can always switch your desktop environment to a different one while logging in by changing your session.
2. You get the same applications in both environments.
at
4:46 AM
Posted by
Suresh Kumar A
The first release candidate of Mozilla’s Firefox 3 web browser has been out for almost a month, but the company had been quiet about when the final version would be released. Instead, all we got was that it would ship “when it’s ready.” Well, now we know: it’ll be ready next week.
The final version of Firefox 3 will ship on June 17. This represents 34 months of active development on the follow-up to its hugely successful Firefox 2.
While Microsoft still controls the web browsing market with its Internet Explorer product, Firefox has been making steady inroads. The browser is on track to surpass a 20 percent worldwide market share in July — something that could be achieved even earlier with this new version 3 release. This is pretty incredible when you consider that as recently as 2003, Internet Explorer had over a 94 percent market share.
The next version of Internet Explorer, version 8, is currently in beta testing. A second beta version is not expected until August, meaning the final version is unlikely to ship until the end of 2008 or possibly 2009.
As a Mac user, I found myself no longer using Firefox with the release of Firefox 2. It was simply too slow on my machine. Instead I opted for the Mozilla browser specifically built for the Mac, Camino, and Apple’s own Safari web browser. However, in beta testing Firefox 3 for the Mac, it appears the Mozilla team has significantly improved the speed of the browser. Quite frankly, when compared with Firefox 2, version 3 flies.
Mozilla is attempting to set a Guinness World Record by making Firefox 3 the most downloaded software in a 24-hour period. I have a feeling they’ll make it. More info on how to participate here.