Bryley Systems again named Top 501 MSP

Bryley Systems Inc. Ranks in Top 501 Managed IT Service Providers (MSPs) Worldwide for the Third Consecutive Year

9th Annual MSP 501 Ranking and Study Identifies Bryley Systems as one of the
World’s Most Progressive MSPs in Information Technology

June 7, 2016: Bryley Systems of Hudson, MA, ranks #350 among the world’s most progressive managed IT service providers, according to Penton Technology’s 9th-annual MSPmentor 501 list.

“On behalf of Penton and MSPmentor, I would like to congratulate Bryley Systems for its recognition as an MSP 501 honoree,” said Aldrin Brown, Editor in Chief, MSPmentor. “The managed IT service provider market is evolving at a rapid pace and the companies showcased on the 2016 MSP 501 list represent the most agile, flexible and innovative organizations in the industry.”

Bryley has ranked in the MSP 501 for three consecutive years: #440 in 2014, #462 in 2015, and advancing to #350 in 2016.

In conjunction with the MSPmentor award, Bryley Systems is #308 on the Clarity Total IT Services Provider (TSP) List, which ranks MSPs in their ability to provide complete solutions to their clients.

“Demand for Bryley’s services is being driven by IT complexity, the need for end-user support, security concerns, and compliance requirements,” said Gavin Livingstone, President of Bryley Systems. “We are pleased to once again rank in the MSP 501; it is a great honor and demonstrates our dedication to remain one of the top providers of managed IT services worldwide.”

We’re looking for field-service and business-development employees

Bryley Systems is growing.  We have an immediate need for field-service personnel with IT experience and plan to add a person to our business-development team.

Interested applicants may email HR@Bryley.com or call 978.562.6077.

Raymond Baldez joins Bryley Systems!

Bryley Systems is pleased to announce the recent addition of Raymond Baldez to its Technical Services team.  Mr. Baldez is assuming the role of IT Support Technician and has previous experience with All IT Support in Boston.  He is a graduate of ITT Technical Institute, Cisco Networking Academy, and Worcester Technical High School.

Bryley Basics: How to identify the ransomware source on a computer network

Mike Carlson and Gavin Livingstone, Bryley Systems Inc.

Mike Carlson, CTO and a 20-year veteran of Bryley Systems, had these suggestions on what to do when you get ransomware on your computer network:

  • Identify the end-user login name associated with the ransomware “How to decrypt” text files placed in the shared folders. (Check the properties of these text files to determine the originator.)
  • Remove that end-user’s workstation from the network immediately; preferably disconnect the network cable, but, if that is not feasible, power it down.
  • Restore all encrypted files from backup.
  • Erase the infected workstation(s) completely, then rebuild them.
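The first step above can be partly automated. Here is a minimal sketch that scans a shared folder for ransom-note files and reports each file’s owning account; it assumes the share is mounted on a Unix host (so `Path.owner()` works), and the “How to decrypt” filename pattern is just an example — adjust it to match the notes your variant drops:

```python
# Sketch: map each ransom-note text file in a shared folder to the
# account that created it, to identify the infected end-user login.
# Assumes a Unix-mounted share; on Windows, read the file's ACL owner.
from pathlib import Path

def ransom_note_owners(share_root, pattern="*How to decrypt*"):
    """Return {note-file path: owning user account} for matching files."""
    owners = {}
    for note in Path(share_root).rglob(pattern):
        try:
            owners[str(note)] = note.owner()
        except (KeyError, OSError):  # owner lookup can fail on some mounts
            owners[str(note)] = "<unknown>"
    return owners

if __name__ == "__main__":
    for path, owner in ransom_note_owners("/mnt/shared").items():
        print(f"{owner}\t{path}")
```

If every note resolves to the same account, that login (and its workstation) is the likely source to pull off the network.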

In addition, we offered these suggestions in our July 2015 Bryley Information and Tips (BITs):

  • To be prudent, change online and system passwords
  • Create forensic images of infected computers
  • Preserve all firewall, Intrusion Prevention, and Active Directory logs for potential analysis by law-enforcement officials

These three can’t hurt, but the first one won’t stop the next attack and the last two are a bit of a stretch; it seems unlikely that the criminals will ever be pursued unless they happen to be working in this country (which also seems unlikely).

The US Computer Emergency Readiness Team (US-CERT) defines ransomware, its variants, and some solutions at Alert TA16-091A, Ransomware and recent variants.

Search Engine madness

Lawrence Strauss, Strauss and Strauss

A long time ago in the Information Age, there was Yahoo!. Yahoo! was the work of Jerry Yang and David Filo, grad students at Stanford, and was a guide to the soon-to-be-bursting-out World Wide Web. Early versions of Yahoo! date from when there were about 200,000 websites (now there are around a billion).

Yahoo! was the work of people who spent their time looking for interesting sites on the Web; when they found something of value, the discovered site would make the Yahoo! list, sometimes with a brief, opinionated review of what to expect on a visit. And an opinionated review was what netizens sought to deal with the voluminous web: What did the people at Yahoo! think was a good resource for any given subject?

But when the sites and pages ballooned in the mid-’90s, the Web begged for software-based means to reveal its contents in a helpful way. Engineers adapted database-sorting software to the task, authoring Lycos, Overture, Excite, and Alta Vista. AOL was the most popular way to access the Internet at the time, and so it was imitated, generating what became known as “portals”: one by one, each of those search engines tried to follow AOL’s model, creating a content-rich site so visitors would theoretically never have to leave¹.

Google, also developed by Stanford students, Larry Page and Sergey Brin, took a different approach. Google emerged from this trend of bloated interfaces as a bare-bones search engine. Google also incorporated a different technology, Page Rank. Page Rank helped prioritize search results not just on the basis of a page’s content, but also on how often other web pages link to it. The thinking was that a good resource will be highly valued by others, who will naturally want to link to it from their own web pages. Google uses a combination of methods to arrive at its results for a given search. And Google, so confident it would lead visitors to the right answer, included an “I’m Feeling Lucky” button to take a visitor directly to the top item on the search-results page.
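The core idea behind Page Rank can be sketched in a few lines: a page’s score depends on the scores of the pages linking to it, refined over repeated passes. The link graph and damping factor below are invented example values, not Google’s actual data or algorithm:

```python
# Minimal illustration of the Page Rank idea: rank flows along links,
# so a page linked to by important pages becomes important itself.
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]} -> {page: score summing to 1}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:  # pass rank equally to each linked page
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

if __name__ == "__main__":
    # C is linked to by both A and B, so it ends up ranked highest.
    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    for page, score in sorted(pagerank(graph).items()):
        print(page, round(score, 3))
```

Running this on the three-page example ranks C above A above B, because C collects links from two pages while B collects only half of A’s vote.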

Google’s technology and approach left the others in the dust … and now we are in an age in which Google is nearly the only major search engine left. And while Yahoo! is still extant, CEO Marissa Mayer is selling it for parts.

Today, it’s estimated that 80% of the time that we search the web, we Google.

(See comScore’s comScore Releases February 2016 US Desktop Search Engine Rankings and Search Engine Land’s Who’s Really Winning The Search War? by Eli Schwartz, 10/24/2014.) The other options include Microsoft’s successfully relaunched Live Search, now known as Bing (and, on Yahoo’s site, branded as Yahoo! search), which handles around 20% of Web searches. And there are lesser-known search engines like DuckDuckGo (although growing because of its privacy aims, it is mostly a Bing-derived search² and represents less than 1% of searches) and similar, even less frequently used Google-derived privacy-protected searches, such as ixquick.com.

The Business of Searching for Business

Although the web was popularly dubbed Web 2.0 around ten years ago and 3.0 more recently³, people still use it to do most of the same things as in the ’90s. And 70% of the time, we start with a web search (per the 2014 research of 310 million web visits by the web content-creating company Conductor, in the Nathan Safran article Organic Search is Actually Responsible for 64% of Your Web Traffic). So search is important to businesses that want to use the web to get searchers to consider their services.

And not only is the top position potentially lucky for the Google searcher: according to a study by the ad network Chitika, the top position on the search-results page is clicked 33% of the time (from the article No. 1 Position in Google Gets 33% of Search Traffic by Jessica Lee). So, no wonder there is an industry, SEO (Search Engine Optimization), devoted to getting pages into that top position.

As a result of the desire for the top position, there is an ongoing cat-and-mouse game between the makers of web pages (or their SEO contractors) and the search engines. The makers are the cats, trying to catch that elusive mouse: top-of-page placement when someone searches on the ideas that connect to their service.

One of the first examples of this game was the infamous Meta name=’keywords’. Created by the World Wide Web Consortium (W3C) in the ’90s out of a desire to get useful indexing information to the search engines, the Keywords meta tag could contain a list of words giving a search engine’s software robot ready access to the important ideas on a given page⁴. The only problem was how quickly web-page writers tried to stuff (aka spam) the Keywords tag with words they thought would make a page rise to the top of the pack of search results (and I’ve seen some ridiculous things, like porn words placed by an “SEO expert” in the Keywords meta tag of a retailer).
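For readers who never saw one, a Keywords meta tag sat in a page’s hidden head section and looked something like the fragment below; the terms shown are invented for illustration:

```html
<head>
  <!-- Invented example terms; search robots once read this list -->
  <meta name="keywords" content="managed IT services, network support, computer repair">
</head>
```

Because the list was invisible to human visitors, there was nothing to discourage stuffing it with unrelated terms — which is exactly why the engines stopped trusting it.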

In 2002, Alta Vista’s John Glick said, “In the past we have indexed the Meta keywords tag but have found that the high incidence of keyword repetition and spam made it an unreliable indication of site content and quality.” (See the Search Engine Watch article Death of a Meta Tag by Danny Sullivan, 9/30/2002.) And Alta Vista was one of the last to support the Keywords tag.

And this game goes on today; only the venue changes. Google just announced that it is delisting or downgrading sites with outbound links it considers illegitimate (links intended to boost the Page Rank of the page being linked to). In the current case, bloggers were linking to sites in exchange for gifts; Google discovered the pattern of behavior and exacted penalties on the offending bloggers’ sites. (See the Search Engine Land article Google’s manual action penalty this weekend was over free product reviews by Barry Schwartz, 4/12/2016.)

Google is our (mostly) sole arbiter of the content of the voluminous web, which we access by its rankings of importance (aka a software-derived opinionated review). And an opinionated review is what netizens seek in order to deal with the voluminous web: What does the Google engine think is a good resource for any given subject? This, of course, sounds a lot like trying to appeal to David and Jerry’s Yahoo!: fundamentally, the rules that applied to catching Yahoo’s favor are the rules that apply to winning Google’s highest ranks.

Next installment: How the Web is Won.

Notes

¹ Keeping visitors was valuable in two ways. In lieu of a truer model, a site’s “eyeball” count was a measure by which too many web-based companies’ valuations went stratospheric. Also, ad revenues were based on the traditional media-derived model of cost per impression.

² DuckDuckGo’s search is not identical to Bing’s in the way Yahoo’s is, as of this writing. DuckDuckGo, per its own site, claims to have its own web robot collecting information on web pages; it also aggregates information from disparate sources, chiefly Bing, and uses a proprietary method to weigh the importance of information from all the sources.

³ Web 2.0 was meant to indicate increased content coming from web users (e.g., blogs and YouTube channels). Web 3.0 is a proposal by Web inventor Tim Berners-Lee to extend the web’s HTML to include access to additional code and computer languages so that computers can process the data in the HTML; it is designed so that both humans and machines can make use of the content in a way native to each. (See the W3C standards on the Semantic Web.)

⁴ Meta tags (or metatags) are mostly hidden HTML content; examples include a page-refresh function and the page-content description.

Recommended Further Reading:

  • The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture by John Battelle.
  • Googled: The End of the World As We Know It by Ken Auletta.