A look at Socl--Microsoft's secret 'social search' project

Microsoft is apparently ready to mix it up with Facebook and Google. Speculation began to mount that the software giant was getting ready to launch its own social network after it accidentally published a Web site called Socl.com earlier this year. The site, which was found to be a Microsoft project, was described as a "social search" service that would allow users to "find what you need and share what you know." The service offered Facebook and Twitter sign-in buttons, but little else was known about Socl.com. Microsoft soon took the site down, saying it was "an internal design project from one of Microsoft's research teams which was mistakenly published to the Web."

Now we have a clearer picture of Socl, thanks to The Verge, which recently got an exclusive look at the service. The site, which is still in private beta testing and may never be released publicly, "mixes search, discovery, and, go figure, a social network," the blog reported.

Socl offers a basic three-column layout that is reminiscent of Facebook's design, with navigation tools to the left, a social feed in the center, and invites and other options to the right. Central to the experience is a pseudo status box at the top of the page that asks users "What are you searching for?" Search functionality would presumably be provided by Bing, Microsoft's search engine.

The site relies heavily on tagging, allowing users to identify topics they are interested in and receive social updates on those interests. However, The Verge contends that Socl's approach isn't much of an improvement over Google's saved searches function. Socl also touts a video party feature that allows users to chat and view YouTube videos with their friends. While the site is intended to get people interacting more with each other based on their search queries, there is not much in the way of private interaction with other users, such as messaging or @replies.
It's unknown when or if Socl will be rolled out publicly. Microsoft already relies heavily on its partnership with the social network giant Facebook. In May, Microsoft unveiled a new feature for its Bing search engine, baking in recommendations from a Web surfer's Facebook friends in order to make the results more relevant.

A look into the mind-bending Google Glass of 2029

When Google Glass made its first public appearance on April 4, 2012, it signaled the beginning of a new era of computing. Consider the precedent: In the span of half a decade, the computer moved from the desktop to the pocket, and now with Glass it is moving to the head, on its way to eventually integrating itself inside the human body. Ray Kurzweil, Google's director of engineering, calls Glass a "solid first step" along the road to computers that rival and then exceed human intelligence. Kurzweil, who is also an accomplished inventor and futurist, predicts that by 2029 computers will match human intelligence, and nanobots inhabiting our brains will create immersive virtual reality environments from within our nervous systems:

If you want to go into virtual reality, the nanobots shut down the signals coming from your real senses and replace them with the signals that your brain would be receiving if you were actually in the virtual environment. So this will provide full-immersion virtual reality incorporating all of the senses. You will have a body in these virtual-reality environments that you can control just like your real body, but it does not need to be the same body that you have in real reality. We'll be able to interact with people in any way in these virtual-reality environments. That will replace most travel, but we'll also have new travel technologies for our real bodies using nanotechnology.

[Photo: As a Google director of engineering, Ray Kurzweil is working on improving computers' understanding of natural language.]
[Photo: Ray Kurzweil, author of 'The Singularity Is Near: When Humans Transcend Biology,' is working to reverse engineer the human brain.]

Further down the road, people will be uploading their entire brains to computers, Kurzweil said. The human brain will gain additional thinking power, expanding the neocortex into the compute cloud in the 2030s, Kurzweil said, accessing trillions of new concepts and experiences at speeds much faster than the biological brain. The fusion of digital and biological parts will enable a qualitative leap for humans based on a quantitative expansion of thinking, according to Kurzweil.

It's not clear whether Google's co-founders fully buy into Kurzweil's view of technology evolution or his notion of the "Singularity," a prediction that around 2045 intelligence will become increasingly nonbiological and trillions of times more powerful, and any distinction between humans and machines, or between so-called reality and virtual reality, will be erased. But it wouldn't be out of character for Google co-founders Larry Page and Sergey Brin to consider moon shots like Google servers with direct and assistive connections to your brain, as they have for self-driving cars. It's mind-bending to think about the implications, but it seems possible that Google could monetize your brain instantaneously as it thinks.

[Photo: Google's Sergey Brin is personally funding the development of in-vitro, lab-grown beef.]

Hunger pangs? Google's brain, cohabiting with your bio-brain, immediately flashes images of food, optimized for your health and eating pleasure, based on data from the sensors capturing your vital signs, data from anonymized individuals with similar profiles, your refrigerator's contents, and super-targeted ad inventory.
The image that elicited the biggest autonomic response is ordered from a local eatery, or, if you are part of the DIY movement, Glass displays a recipe with preparation instructions from the tiny eye embedded in your retina or visual cortex. Alternatively, the meal could be prepared by a robot or even formulated on the spot from base chemistry by nanobots. Google receives payment for various contextual ads and offers that are part of the human-computer data flow across the indistinguishable virtual and real worlds.

Biologically inspired software?

Coming back to the present, Kurzweil's tenure at Google to date doesn't yet appear to include merging the human brain with the Google cloud or creating a future version of Glass the size of a blood cell that runs through your brain capillaries. He came to Google late last year with the more modest charter of improving Google computers' understanding of natural language, which is a prerequisite for artificially intelligent computers that pass for human. It's part of Google's effort to move to "conversational search," where it's possible to have speech as the primary input for a device.

"We are developing software that is biologically inspired and uses the lessons that biological evolution learned in evolving the human brain and neocortex to create intelligent machines," Kurzweil said.

Google has a well-established research program for developing artificial intelligence. Applying design principles from neural networks, Google engineers realized significant improvements in the quality of speech recognition. Google has also built a large data repository, Knowledge Graph, with nearly a billion objects and billions of relationships among them as a foundation for understanding the semantic content and context of queries.

"Knowledge Graph has good coverage of people, places, things, and events, but there is plenty it doesn't know about.
We are at 1 percent," John Giannandrea, director of engineering for the repository, told CNET.

[Photo: Jeff Dean has been involved in many of Google's key technology projects during his 14 years at the company.]

While Kurzweil and Google have moon-shot ambitions for the future of Glass, it will enter a mode of incremental improvements over the next half decade. Smartphones over the last five years have become far more capable, powerful, and popular each year, following the cadence of Moore's Law, but there has been no quantum leap. Over the next few years, Glass also faces a tougher adoption curve than smartphones, which are more essential for users than the wearable accessory. For Glass to break through, natural language input and conversational search need to make quantum leaps. Google Fellow Jeff Dean says that voice search and image recognition will substantially improve over the next five years. "If you're using Google Glass, it's going to be able to look around and read all the text on signs and do background lookups on additional information and serve that. That will be pretty exciting," Dean said in an interview with TechFlash.

However, Google's brain needs a better understanding of natural language, which is part of Kurzweil's mandate. "If we could get to the point where we understand sentences, that will really be quite powerful," Dean said. "So if two sentences mean the same thing but are written very differently, and we are able to tell that, that would be really powerful. Because then you do sort of understand the text at some level because you can paraphrase it."

A problem for search engines today is that much of the data isn't "labeled," Dean said. It doesn't offer much data to describe itself in a way that would make it easier for a search engine to catalog. In addition, answers to more complicated queries require stitching together pieces of data from wildly disparate sources.
For example, a Web page doesn't exist to answer the question, "What's the Google engineering office with the highest average temperature?," Dean told TechFlash. "There's no Web page that has that data on it. But if you know a page that has all the Google offices on it, and you know how to find historical temperature data, you can answer that question. But making the leap to being able to manipulate that data to answer the question depends fundamentally on actually understanding what the data is."

Nor does Google's brain know how to book your vacation or business trip. "That's a very high-level set of instructions. And if you're a human, you'd ask me a bunch of follow-up questions: 'What hotel do you want to stay at?' 'Do you mind a layover?' -- that sort of thing," Dean said. "I don't think we have a good idea of how to break it down into a set of follow-up questions to make a manageable process for a computer to solve that problem. The search team often talks about this as the 'conversational search problem.'"

Google isn't yet talking about bringing Glass into the augmented reality world of 3D and virtual reality. At present, it can take videos and pictures, send a tweet, and provide notifications, but it will likely enter the augmented reality realm within the next five years, especially as the cost and size of processors, sensors, and other components come down and their power increases. Startups such as Meta are getting a head start on Google. Within the next two years, the company expects to ship augmented-reality glasses that combine the power of a laptop and smartphone in a pair of stylish frames that map gesture-controlled virtual objects into the physical world, similar to the movie portrayals of app control via gestures in "Iron Man" and "Avatar."

But even Google Glass with 3D, augmented reality, and vastly improved conversational search is still a primitive toy in Kurzweil's long view. "We'll make ourselves a billion times smarter by 2045," Kurzweil says.
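The multi-source question Dean describes boils down to a join: neither data set answers the question on its own, but stitching them together does. A minimal sketch in Python, using invented office names and temperature figures (none of this data comes from Google):

```python
# Hypothetical data. One source lists offices; a separate source holds
# historical average temperatures (all figures invented for illustration).
offices = ["Mountain View", "Zurich", "Sydney", "Tel Aviv"]

avg_temp_c = {
    "Mountain View": 15.2,
    "Zurich": 9.3,
    "Sydney": 18.3,
    "Tel Aviv": 19.9,
}

# Neither source alone answers "which office has the highest average
# temperature?" -- joining them and taking the max does.
warmest = max(offices, key=lambda city: avg_temp_c[city])
print(warmest)  # -> Tel Aviv
```

The hard part, as Dean notes, isn't the two-line join; it's recognizing from an unstructured query which sources to combine and how.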
In a 30-year span, computing has progressed from the Macintosh, which launched in 1984, to Google Glass. A moon shot traversing from today's Google Glass to nanobots communicating between your brain and a Google cloud that is indistinguishable from a human in the next 15 to 30 years is difficult to digest, but not that far-fetched.
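Kurzweil's "billion times smarter by 2045" figure is at least arithmetically consistent with exponential growth: a capability that doubles once a year (an assumed cadence in the spirit of Moore's Law, not a figure from the article) reaches a billion-fold gain in about 30 doublings, since 2^30 is roughly 1.07 billion:

```python
# Count how many annual doublings it takes to exceed a billion-fold gain.
# The once-a-year doubling rate is an assumption for illustration.
doublings = 0
factor = 1
while factor < 1_000_000_000:
    factor *= 2
    doublings += 1
print(doublings, factor)  # -> 30 1073741824
```

Thirty doublings from roughly 2015 lands at 2045, which is where Kurzweil places the Singularity.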