The colourful makarapa – headgear that starts off humbly as a miner’s helmet, and is lovingly and completely transformed into an essential item for the South African football fan. (Image: Chris Kirchhoff, MediaClubSouthAfrica.com)

Flying in from Canada to experience African football first-hand, Peter Severinac, from Ontario, was blown away by the electric atmosphere inside the Royal Bafokeng Stadium in Rustenburg on Wednesday evening, when Bafana Bafana took on New Zealand in a 2009 Fifa Confederations Cup match.

Surrounded by thousands of celebrating South African fans, all making their way out of the stadium, Severinac could hardly contain his amazement at what he had experienced during the game.

“Those trumpets are great, I have never experienced anything like it,” Severinac said, referring to the air horns (vuvuzelas) that distinguish football matches in South Africa from anywhere else in the world. “Nothing compares to the feeling of being in the stands with all that dancing and noise.

“They treated me like a member of their family when they found out I was from outside South Africa. I will definitely be back next year for the World Cup, and will bring back as many friends as I can,” said Severinac before he was swallowed up by the moving crowd.

Once-in-a-lifetime experience

Benito Lenon, who travelled from Madrid, Spain, to watch La Furia Roja play in Fifa’s “Festival of Continental Champions”, said South Africa “seems like such a good country”.

“I have been here for six days now watching football, and I really love the friendly people here,” Lenon said.
“I must tell you, South Africans are the most friendly and hospitable people I have met.”

Although Spain were playing Iraq in Bloemfontein on the same day, Lenon chose to experience a Bafana Bafana match in Rustenburg instead, having heard from friends that it was a once-in-a-lifetime experience to celebrate football in South Africa. He certainly wasn’t disappointed.

As tens of thousands of spectators made their way to the Royal Bafokeng Stadium before the game, the city of Rustenburg, in South Africa’s North West province, came alive with the sound of a distinctively African Fifa Confederations Cup. The drone of vuvuzelas competed with hooting and cheering as fans made their way toward the stadium through the city, hoping for – and getting – a night of celebration as South Africa beat New Zealand 2-0.

Hours before the match had even started, crowds were gathering outside the stadium, entertained by music and dancers.

“I am here to support my country, and the vibe around the city is great,” said Lebogang Molefe, adding that the Confederations Cup was all about showing the world what South Africa is all about.

“We are a nation that likes to sing, and we are a happy nation,” Molefe said. “I hope our visitors see this now and on television, and I hope they come back for the 2010 World Cup.”

Source: 2010 Fifa World Cup South Africa Organising Committee
1 December 2010

As countries across the world commemorate World Aids Day, the South African Medical Association has called on the public to get tested for HIV, and to encourage others to do the same.

“We call on the public to continue to be tested and influence others to test for HIV as a routine way of ensuring healthy lifestyle choices, irrespective of status, as HIV and Aids does not discriminate,” SA Medical Association (SAMA) chairman Norman Mabasa said in a statement on Wednesday.

In April, President Jacob Zuma launched an HIV Counselling and Testing campaign, which aims to get 15-million South Africans tested by June 2011. To date, 4.9-million people have reported for counselling and testing at the country’s health facilities.

SAMA has also urged all doctors to reaffirm their important role in addressing the epidemic by encouraging all patients to get tested and, if necessary, treated, and by providing ongoing education about the importance of safe sexual practices. “Collective responsibility will reduce the spread of HIV,” Mabasa said.

Under the theme “We Are Responsible”, South Africa’s 2010 World Aids Day campaign encourages people to show collective responsibility: to encourage and support partners, family and community members to test voluntarily for HIV, and to set an example for others by leading healthier lifestyles.

Build-up activities for World Aids Day started in November with a series of dialogues between the government and its social partners. These social dialogues will culminate in Cabinet ministers, deputy ministers, premiers and MECs being deployed to communities across the country today to discuss ways of reducing new HIV infections.

Deputy President Kgalema Motlanthe, who is also chairman of the South African National Aids Council, will lead the way, visiting families and addressing community members and health care workers in Driefontein in Mpumalanga province.

Source: BuaNews
In this web-enabled world of ours, you have to wonder why business cards are still so popular. Shouldn’t there be a better way? A number of startups have attempted to address this problem with ingenious solutions that range from iPhone apps to custom URLs. Others are calling for the use of QR codes for mobile data exchange. Unfortunately, no one service has hit the sweet spot just yet, but newcomer “E” thinks they have it figured out. Will “E” succeed where the others have failed? Or is this one industry that refuses to become digitized?

HelloMyNameIsE.com

You have to appreciate E’s creative URL – it’s memorable, but it also makes you curious. E? What’s E?, you wonder. When I first encountered the URL, it was in a tweet which read “I’m now using E to add friends to my Twitter account. More info on http://hellomynameise.com.” Did I click through? You bet.

“E,” as it turns out, is a new spin on digital contact exchange. Instead of using paper business cards, you use your phone to exchange data. At first, that may sound very much like the mobile contact service Dropcard, but it’s not. The only similarity between E and Dropcard is that they both allow you to customize your profile online and share it with others; the similarities end there. To use Dropcard, you either text or use a mobile app which emails your contact info to the person you just met. With E, you go to a mobile web URL that lets you exchange a passcode with your new contact. The passcode is simply a five-digit code which is entered into the mobile web app itself. They show you theirs, you show them yours…that sort of thing. Once connected, you don’t receive an email message with their contact info as with Dropcard. E goes a step further and actually adds that contact to all the services you’ve already integrated with E.

Service Integration

At the moment, E allows you to integrate Twitter, PICNIC (a network for the PICNIC conference), and Soocial.
However, Delicious, European social portal Netlog, and Last.fm are listed as coming soon. After you integrate these services with E, any contact you add is immediately added to all those other web services, too. And thanks to Soocial, an address book solution, E contact info can also synchronize with your email address book in Gmail, Highrise, your OS X address book, or the address book on your phone itself.

Barriers To Adoption

E faces one of the typical problems of many web 2.0 startups – it doesn’t work for you until a lot of people are using it. Just because you have a profile on E doesn’t mean that the people you meet do. And unlike a service like Dropcard, there isn’t a way to use E without the other person’s involvement.

In addition to the service itself, the developers of E came up with a crazy but interesting idea for a hardware device called the “Connector.” With this device, you can exchange contact info with others just by touching two Connectors together. While gadget junkies and shiny-object collectors may find this device appealing, it could easily remain a niche gadget that ends up sitting on the shelf next to your Chumby and Nabaztag.

To cross the adoption barrier, those at E would be smart to sponsor events where everyone gets a Connector at registration. After a few high-profile events, they would have industry movers and shakers on board, and that’s always a good place to start. Sponsoring events may be just what the company is planning, though, since their site mentions that the “Connector will be released at large events in the near future.”

Will It Work?

At present, the E service is very basic. Twitter integration is the only service of note that works so far (Soocial looks great, but is in private beta). The profiles themselves are also not as flexible as Dropcard’s. You can easily add and remove services with Dropcard, but with E, I wasn’t even able to add a second company to represent my second job.
The services section of the web site is confusing – it doesn’t allow you to do anything more than customize which services are connected. The actual profile information is entered under “Settings,” so you can’t specify that only personal contacts get your home address, for example. It appears to be all-or-nothing.

E still has far to go to become a truly successful digital contact exchange service, but at least they’re trying something different. Because they operate via a mobile URL, not an app specific to any one device, they’re better positioned for universal adoption than a service that designates itself as iPhone-only, for example. The service is in private beta testing now, but you have the opportunity to make an impassioned plea as to why they should invite you on the signup page here. (If you get in, feel free to add me: 17975.)

Check out the video below to see E in action:

Hello, my name is E from Renato Valdés Olmos on Vimeo.

By Sarah Perez
Since May, SharePoint 2010 has been out in the wild, and it has been big business for Microsoft. AIIM estimates that SharePoint sales topped $1.3 billion in 2008 and are growing at an annual rate of 25 percent. Various AIIM reports this year find that anywhere from 74 percent to 98 percent of organizations are planning to try SharePoint. But how are organizations using it? The AIIM report found that:

- 47 percent use it for file sharing
- SharePoint is used infrequently for complex business processes, records management or digital asset management
- SharePoint is typically viewed as just one component of a larger Enterprise Content Management (ECM) strategy

Ann All quoted Rob Helm, managing VP of research at Directions on Microsoft, as saying that “Microsoft is going to continue to pull SharePoint through as an infrastructure. It may never be as dominant as Office, but it may be like SQL Server, Microsoft’s database product. If you’re on the Microsoft stack, you almost can’t avoid it.”

The low price point to get started and the ready availability of the software have led many organizations to start using SharePoint without any real plan for what to do with it. In fact, the AIIM report pointed out that governance is one of the biggest issues around SharePoint. AIIM found that 60 percent of organizations use SharePoint with no modifications out of the box. In more than 50 percent of the organizations using SharePoint, the decision to use it was made without a formal business case. Among those organizations, just 22 percent provide any guidance to their workers about document types and classification, and less than 15 percent have any notion of retention policies and legal discovery procedures.

For those organizations with an upfront plan for what they want to accomplish with SharePoint, AIIM found that getting SharePoint to do things the way their organization wants often isn’t as easy as they expected.
Thirty percent said that the development time and effort required to make SharePoint business-ready was a big challenge for them. About one-third of the respondents said their SharePoint projects took longer to implement than expected, and 21 percent complained that the SharePoint interface was not intuitive or easy to use.
By Kim Gaskins

The research consultancy Latitude recently completed a multi-phase innovation study, Children’s Future Requests for Computers and the Internet, which was published in collaboration with ReadWriteWeb. The study asked more than 200 kid-innovators across the world, ages 12 and under, to draw the answer to this question: “What would you like your computer or the Internet to do that it can’t do right now?”

You can explore the full findings here:
Part 1: Kids are the Road to Tech Innovation
Part 2: From the Mouths of Babes: The Future of Tech is Robots and Real World Integration
Download the study summary (PDF) here.

By and large, kids indicated that they’d like future technology to fulfill three primary functions:

1. Serve as an extension of themselves, with more fluid and intelligent modes of interaction
“Help Computer: it knows what you are thinking and does it for you – both touch and voice controlled.” – Male, 8, Brisbane, Australia

2. Seamlessly integrate digital objects, places and experiences with the real, physical world
“I’d like it if my computer could convert images or food and make them real.” – Female, 10, Pakenham, Australia

3. Empower users by conferring new knowledge or abilities and unlocking new experiences
“I want to video kids on the other side of the world using a different kind of language.” – Female, 7, Warwick, RI, United States

“These three expectations are especially powerful when viewed together as part of a larger framework, because they speak to the way that kids are perceiving themselves in relation to the world – and what’s possible in it.
Essentially: if devices are an extension of one’s self, and these devices are increasingly integrated with the physical world, it follows that technology is a gateway to expanding our own experiences with and confidence in the world at large,” says Neela Sakaria, Senior Vice President at Latitude. “Technology is no longer an end in itself – instead, it becomes a path to more meaningful experiences with our surroundings. Kids are naturally intuiting this, and we as adults are following closely behind,” she adds.

To illustrate the study’s high-level findings and how they interrelate, Latitude created an infographic in collaboration with FFunction, (cc) some rights reserved.

Latitude is proud to have partnered with ReadWriteWeb on phase 1 of “Children’s Future Requests for Computers and the Internet.” Latitude is an international research consultancy helping clients create engaging content, software and technology that harness the possibilities of the Web. To learn more about working with Latitude, fill out this form or contact Ian Schulte (firstname.lastname@example.org).

Image credit: Marcus Kwan
By Paul Voosen, Jul. 6, 2017, 2:00 PM
(Graphic: G. Grullón/Science)

Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: it’s a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes – the neurons, so to speak – to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”

No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them.
And every year, this gap is going to get a bit larger.”

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions.
Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”

Like many AI coders, Mark Riedl, director of the Entertainment Intelligence Lab at the Georgia Institute of Technology in Atlanta, turns to 1980s video games to test his creations. One of his favorites is Frogger, in which the player navigates the eponymous amphibian through lanes of car traffic to an awaiting pond. Training a neural network to play expert Frogger is easy enough, but explaining what the AI is doing is far harder.

Instead of probing that network, Riedl asked human subjects to play the game and to describe their tactics aloud in real time. Riedl recorded those comments alongside the frog’s context in the game’s code: “Oh, there’s a car coming for me; I need to jump forward.” Armed with those two languages – the players’ and the code – Riedl trained a second neural net to translate between the two, from code to English. He then wired that translation network into his original game-playing network, producing an overall AI that would say, as it waited in a lane, “I’m waiting for a hole to open up before I move.” The AI could even sound frustrated when pinned on the side of the screen, cursing and complaining, “Jeez, this is hard.”

Riedl calls his approach “rationalization,” which he designed to help everyday users understand the robots that will soon be helping around the house and driving our cars. “If we can’t ask a question about why they do something and get a reasonable response back, people will just put it back on the shelf,” Riedl says.
But those explanations, however soothing, prompt another question, he adds: “How wrong can the rationalizations be before people lose trust?”

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counterfactual probes. The idea is to vary the inputs to the AI – be they text, images, or anything else – in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words – or parts of an image or molecular structure, or any other kind of data – most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference – a black image or a zeroed-out array in place of text – and transition it step-by-step toward the example being tested.
Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in – outfitted with the standard medley of mugs, tables, chairs, and computers – as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says – a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand.
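The two probing styles described above can be sketched in a few lines of Python. Everything here is a toy stand-in: `black_box` is a hypothetical word-counting scorer rather than a real classifier, single-word deletion stands in for real LIME (which fits a local surrogate model over thousands of perturbations), and `toy_model` stands in for a trained network. But the shape of both ideas comes through: perturb-and-compare for LIME, and gradient averaging along a blank-baseline-to-input path for Sundararajan’s method.

```python
import numpy as np

# --- Probe 1: LIME-style perturbation (toy version) ----------------------
def black_box(text):
    # Hypothetical opaque classifier: a crude positivity score.
    positive = {"great", "brilliant", "moving"}
    negative = {"horrible", "dull"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def perturbation_probe(text):
    """Delete each word in turn and record how far the score moves;
    the words whose removal shifts the score most mattered most."""
    base = black_box(text)
    words = text.split()
    shifts = {w: base - black_box(" ".join(words[:i] + words[i + 1:]))
              for i, w in enumerate(words)}
    return sorted(shifts.items(), key=lambda kv: abs(kv[1]), reverse=True)

# --- Probe 2: baseline-to-input path (integrated gradients) --------------
def numeric_grad(f, x, eps=1e-5):
    # Central-difference gradient, so the toy works for any scalar f.
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def integrated_gradients(f, x, steps=200):
    """Average gradients along the straight path from a blank (zero)
    baseline to the input, then scale by the input-baseline gap."""
    baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps          # midpoint rule
    avg = sum(numeric_grad(f, baseline + a * (x - baseline))
              for a in alphas) / steps
    return (x - baseline) * avg

def toy_model(x):
    # Hypothetical stand-in for a trained network's score.
    return 2.0 * x[0] + np.tanh(x[1]) + x[0] * x[2]

ranking = perturbation_probe("a brilliant film with a horrible ending")
attributions = integrated_gradients(toy_model, np.array([1.0, 2.0, -0.5]))
```

A handy sanity check on the path method is its “completeness” property: the per-feature attributions sum, up to discretization error, to the score difference between the input and the blank baseline.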
As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, Caruana joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”

Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data – say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns – somehow – to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
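The loop described above (forward pass, compare with labels, send the error backward, reweight) fits in a short NumPy sketch. The task and network are toys: a two-layer net learning XOR, with a hypothetical hidden size and learning rate chosen for illustration. But the backpropagation mechanics are the real thing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny labeled data set: XOR, which no single-layer model can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights; sizes are arbitrary illustration choices.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward: each layer "fires" on features of its input.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward: compare with labels and send the error back through the
    # network (backpropagation), reweighting each connection.
    d_out = out - y                       # cross-entropy gradient at logits
    d_W2 = h.T @ d_out;  d_b2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # tanh derivative
    d_W1 = X.T @ d_h;    d_b1 = d_h.sum(0)
    W2 -= lr * d_W2 / 4; b2 -= lr * d_b2 / 4
    W1 -= lr * d_W1 / 4; b1 -= lr * d_b1 / 4

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

After enough repetitions the reweighting settles into a configuration that separates the classes, even though nothing in the loop says *how* the hidden units will divide up the work: that learned division is exactly the part that stays opaque.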
How AI detectives are cracking open the black box of deep learning

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: they are black boxes. Scientists are now developing tools to get inside the mind of the machine.

First, Yosinski rejiggered the classifier to produce images instead of labeling them. Then, he and his colleagues fed it colored static and sent a signal back through it to request, for example, “more volcano.” Eventually, they assumed, the network would shape that noise into its idea of a volcano. And to an extent, it did: that volcano, to human eyes, just happened to look like a gray, featureless mass. The AI and people saw differently.

Next, the team unleashed a generative adversarial network (GAN) on its images. Such AIs contain two neural networks. From a training set of images, the “generator” learns rules about imagemaking and can create synthetic images. A second “adversary” network tries to detect whether the resulting pictures are real or fake, prompting the generator to try again. That back-and-forth eventually results in crude images that contain features that humans can recognize.

Yosinski and Anh Nguyen, his former intern, connected the GAN to layers inside their original classifier network. This time, when told to create “more volcano,” the GAN took the gray mush that the classifier learned and, with its own knowledge of picture structure, decoded it into a vast array of synthetic, realistic-looking volcanoes. Some dormant. Some erupting. Some at night. Some by day. And some, perhaps, with flaws – which would be clues to the classifier’s knowledge gaps.

Their GAN can now be lashed to any network that uses images. Yosinski has already used it to identify problems in a network trained to write captions for random images.
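The “more volcano” trick above, before the GAN refinement, is gradient ascent on the input: start from static and repeatedly nudge the image in whatever direction raises the class score. A minimal sketch, with a hypothetical differentiable score (`class_score`, rewarding a triangular template) standing in for the classifier’s volcano logit:

```python
import numpy as np

# Hypothetical stand-in for the classifier's "volcano" logit: it rewards
# brightness inside a fixed triangular template and penalizes total energy.
TEMPLATE = np.tril(np.ones((8, 8)))

def class_score(img):
    return float((img * TEMPLATE).sum() - 0.1 * (img ** 2).sum())

def maximize_activation(steps=100, lr=0.5, seed=1):
    """Start from colored static and follow the score's gradient uphill,
    so the input drifts toward the model's internal idea of the class."""
    img = np.random.default_rng(seed).normal(0.0, 0.1, (8, 8))
    for _ in range(steps):
        grad = TEMPLATE - 0.2 * img      # analytic gradient of class_score
        img = img + lr * grad
    return img

static = np.random.default_rng(1).normal(0.0, 0.1, (8, 8))
volcano = maximize_activation()
```

The result maximally excites the score, but, as with Yosinski’s gray mush, nothing forces it to look natural to a human; that is precisely the gap the GAN is brought in to close, by constraining the ascent to images the generator knows how to make.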
Yosinski reversed the network so that it could create synthetic images for any random caption input. After connecting it to the GAN, he found a startling omission. Prompted to imagine “a bird sitting on a branch,” the network – using instructions translated by the GAN – generated a bucolic facsimile of a tree and branch, but with no bird. Why? After feeding altered images into the original caption model, he realized that the caption writers who trained it never described trees and branches without mentioning a bird. The AI had learned the wrong lessons about what makes a bird. “This hints at what will be an important direction in AI neuroscience,” Yosinski says. It was a start, a bit of a blank map shaded in.

The day was winding down, but Yosinski’s work seemed to be just beginning. Another knock on the door. Yosinski and his AI were kicked out of another glass-box conference room, back into Uber’s maze of cities, computers, and humans. He didn’t get lost this time. He wove his way past the food bar, around the plush couches, and through the exit to the elevators. It was an easy pattern. He’d learn them all soon.

Gupta has a different tactic for coping with black boxes: she avoids them. Several years ago Gupta, who moonlights as a designer of intricate physical puzzles, began a project called GlassBox. Her goal is to tame neural networks by engineering predictability into them. Her guiding principle is monotonicity – a relationship between variables in which, all else being equal, increasing one variable directly increases another, as with the square footage of a house and its price.

Gupta embeds those monotonic relationships in sprawling databases called interpolated lookup tables. In essence, they’re like the tables in the back of a high school trigonometry textbook where you’d look up the sine of 0.5. But rather than dozens of entries across one dimension, her tables have millions across multiple dimensions.
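A one-dimensional toy version of such a table is easy to sketch. The keypoints and values below are hypothetical, and real GlassBox-style tables are multi-dimensional and learned, but the two essential ingredients are visible: interpolation between table entries, and a monotonicity constraint baked into the entries themselves.

```python
import numpy as np

# Hypothetical 1-D lookup table mapping square footage to a price score.
# Keypoint inputs are fixed; the outputs act like learned parameters that
# we constrain to be nondecreasing, so the layer is monotonic by
# construction: more square footage can never lower the score.
keypoints_in = np.array([500.0, 1000.0, 2000.0, 4000.0])
raw_params = np.array([0.2, 0.5, -0.1, 0.8])   # unconstrained values

# Monotonicity: cumulative sum of increments clamped to be non-negative.
keypoints_out = np.cumsum(np.maximum(raw_params, 0.0))

def lookup(sqft):
    """Piecewise-linear interpolation between table entries; np.interp
    clamps queries that fall outside the keypoint range."""
    return np.interp(sqft, keypoints_in, keypoints_out)
```

The guarantee holds no matter what gradient descent does to `raw_params`, which is the point: the relationship stays predictable even inside a larger learned model.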
She wires those tables into neural networks, effectively adding an extra, predictable layer of computation: baked-in knowledge that she says will ultimately make the network more controllable.

Caruana, meanwhile, has kept his pneumonia lesson in mind. To develop a model that would match deep learning in accuracy but avoid its opacity, he turned to a community that hasn't always gotten along with machine learning and its loosey-goosey ways: statisticians.

In the 1980s, statisticians pioneered a technique called the generalized additive model (GAM). It builds on linear regression, a way to find a linear trend in a set of data. But GAMs can also handle trickier relationships by finding multiple operations that together massage the data to fit on a regression line: squaring one set of numbers while taking the logarithm of another group of variables, for example. Caruana has supercharged the process, using machine learning to discover those operations, which can then be used as a powerful pattern-detecting model. "To our great surprise, on many problems, this is very accurate," he says. And crucially, each operation's influence on the underlying data is transparent.

Caruana's GAMs are not as good as AIs at handling certain types of messy data, such as images or sounds, on which some neural nets thrive. But for any data that would fit in the rows and columns of a spreadsheet, such as hospital records, the model can work well.

For example, Caruana returned to his original pneumonia records. Reanalyzing them with one of his GAMs, he could see why the AI would have learned the wrong lesson from the admission data. Hospitals routinely put asthmatics with pneumonia in intensive care, improving their outcomes. Seeing only their rapid improvement, the AI would have recommended the patients be sent home.
(It would have made the same optimistic error for pneumonia patients who also had chest pain and heart disease.)

Caruana has started touting the GAM approach to California hospitals, including Children's Hospital Los Angeles, where about a dozen doctors reviewed his model's results. They spent much of that meeting discussing what it told them about pneumonia admissions, immediately understanding its decisions. "You don't know much about health care," one doctor said, "but your model really does."

Sometimes, you have to embrace the darkness. That's the theory of researchers pursuing a third route toward interpretability. Instead of probing neural nets, or avoiding them, they say, the way to explain deep learning is simply to do more deep learning.

[Photo caption: Researchers have created neural networks that, in addition to filling gaps left in photos, can identify flaws in an artificial intelligence. Photos: Anh Nguyen]

Back at Uber, Yosinski has been kicked out of his glass box. Uber's meeting rooms, named after cities, are in high demand, and there is no surge pricing to thin the crowd. He's out of Doha and off to find Montreal, Canada, unconscious pattern-recognition processes guiding him through the office maze, until he gets lost. His image classifier also remains a maze, and, like Riedl, he has enlisted a second AI to help him understand the first one.
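The additive idea behind Caruana's models can be sketched in a few lines. This toy version (not Caruana's actual system) models the target as a sum of per-feature "shape functions," fit by backfitting with binned averages, so each feature's contribution can be inspected on its own:

```python
import numpy as np

# Toy generalized additive model: y is modeled as intercept + f1(x1) + f2(x2),
# fit by backfitting. Each shape function is a piecewise-constant binned
# average of the residuals. Illustrative sketch only.

rng = np.random.default_rng(42)

n, n_bins, n_rounds = 2000, 20, 10
X = rng.uniform(0.1, 1.0, size=(n, 2))
y = X[:, 0] ** 2 + np.log(X[:, 1]) + rng.normal(scale=0.05, size=n)

edges = np.linspace(0.1, 1.0, n_bins + 1)
bins = [np.clip(np.digitize(X[:, j], edges) - 1, 0, n_bins - 1) for j in range(2)]
shapes = [np.zeros(n_bins) for _ in range(2)]
intercept = y.mean()

for _ in range(n_rounds):                       # backfitting iterations
    for j in range(2):
        # Residual with the j-th shape function held out.
        other = sum(shapes[k][bins[k]] for k in range(2) if k != j)
        resid = y - intercept - other
        # Refit shape j as the mean residual within each bin.
        for b in range(n_bins):
            mask = bins[j] == b
            if mask.any():
                shapes[j][b] = resid[mask].mean()

pred = intercept + shapes[0][bins[0]] + shapes[1][bins[1]]
print(float(np.corrcoef(pred, y)[0, 1]) > 0.9)  # additive fit explains y well
```

The transparency comes from the fact that each learned shape function can be plotted against its feature, which is how a pattern like "asthma appears to lower pneumonia risk" becomes visible to a doctor.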
Audi, the German luxury carmaker, has introduced the A5 "brat pack" in India with the launch of the A5 Sportback, the A5 Cabriolet and the S5 Sportback. The powerful, flowing silhouettes of the three offerings make the trio truly aesthetic, with abundant space inside. Beneath the skin, the notably impressive traits include a newly developed suspension, high-performance drivetrains, innovative infotainment solutions, the Audi Virtual Cockpit and the Audi Smartphone Interface.

The A5 Sportback has been priced at Rs 54,02,000, the S5 Sportback at Rs 70,60,000, and the A5 Cabriolet at Rs 67,51,000.

With the A5 Sportback, Audi has filled the vacuum between the A4 and the A6 in its model line-up. The S5 Sportback, the most expensive of the lot, comes with a turbocharged V6 petrol engine.

Here's a glance at what the three cars come packed with:

Engines and drivetrain

Audi A5 Sportback
- 2.0-litre TDI engine producing 140 kW (190 hp)
- 0 to 100 km/h (62.1 mph) in 7.9 seconds
- Seven-speed S tronic dual-clutch transmission

Audi S5 Sportback
- 3.0-litre TFSI V6 turbo engine producing 260 kW (354 hp)
- 0 to 100 km/h (62.1 mph) in 4.7 seconds
- Eight-speed tiptronic; quattro all-wheel drive standard

Audi A5 Cabriolet
- 2.0-litre TDI engine producing 140 kW (190 hp)
- 0 to 100 km/h (62.1 mph) in 7.8 seconds
- Seven-speed S tronic dual-clutch transmission; quattro all-wheel drive standard

Exterior
- Athletic design and outstanding aeroacoustics
- Wave-pattern shoulder line imparts emotional elegance
- Flatter, wider, three-dimensionally modelled Singleframe grille
- LED headlights with "4 Eye" design
- LED rear lights with dynamic indicators
- Pronounced bulges over the wheel arches underscore the sporty DNA
- Panoramic sunroof (A5 Sportback and S5 Sportback)
- One-touch function opens the soft top fully automatically in 15 seconds and closes it in 18 seconds, even while driving at up to 50 km/h (A5 Cabriolet)
- Acoustic hood (A5 Cabriolet)
- 45.72 cm (R18) cast aluminium alloy wheels in 5-spoke design

Interior
- Horizontal architecture of the instrument panel creates a sense of spaciousness
- Optional ambient lighting with 3 colours and 30 combinations
- 480 litres (17.0 cu ft) of luggage capacity
- Leather steering wheel in 3-spoke design with multifunction plus and gearshift paddles

Comfort
- Electrically adjustable front seats
- Driver-side memory function (S5)
- Seat upholstery in leather/leatherette (A5) and Alcantara (S5)
- 4-way lumbar support
- Auto-release function
- 3-zone deluxe automatic air conditioning
- Interior mirror with automatic anti-glare action
- Exterior mirrors electrically adjustable, heated, folding and automatically dimming
- Electrically opening and closing luggage compartment lid (Audi A5 and S5)

Infotainment
- Audi MMI Navigation plus with MMI Touch
- Audi Virtual Cockpit as standard: high-resolution 31.2 cm (12.3-inch) TFT monitor
- 10 GB of flash storage; 21.08 cm (8.3-inch) monitor with a resolution of 1,024 x 480 pixels
- Optional Audi Smartphone Interface
- Optional Bang & Olufsen sound system with innovative 3D sound

Safety
- 6 airbags (A5 Sportback), 4 airbags (A5 Cabriolet), 8 airbags (S5 Sportback)
- ABS, EBD and traction control
- Cruise control
- Parking Aid Plus with rear-view camera
- Tyre-pressure monitoring display
- ISOFIX child-seat mounting
- Anti-theft wheel bolts

Performance
- quattro with self-locking centre differential (A5 Cabriolet and S5 Sportback)
- Audi drive select
- New electromechanical power steering
- Newly developed S sport suspension (S5 Sportback)
- Optional sports differential (S5 Sportback)

The A5 Sportback and the A5 Cabriolet get the same 2-litre diesel engine, mated to a seven-speed S tronic gearbox, that belts out 188 bhp and 400 Nm of torque, taking the Sportback from 0-100 km/h in just 7.9 seconds with a top speed of 235 km/h. The A5 Cabriolet, meanwhile, does 0-100 km/h in 7.8 seconds with the same 235 km/h top speed.
The small difference between the two cars is down to the Cabriolet's quattro all-wheel drive, which the Sportback does not have.

The S5 Sportback gets the only petrol engine in the range: a 3-litre turbocharged V6 that produces 349 bhp and a peak 500 Nm of torque. With its quattro all-wheel drive, the S5 Sportback is the fastest of the lot, going from 0-100 km/h in 4.7 seconds with a top speed of 250 km/h.