Northeastern University College of Computer and Information Science

Carla Brodley Appointed Dean of the College of Computer and Information Science Tue, 29 Jul 2014 13:12:54 +0000

Northeastern University has appointed Dr. Carla E. Brodley as dean of the College of Computer and Information Science, effective Aug. 1, 2014.

Brodley comes to Northeastern from Tufts University, where she is currently professor of computer science with a secondary appointment in the Clinical and Translational Science Institute of the Tufts Medical Center. From 2010 through 2013 she chaired the Department of Computer Science at Tufts.

She is an internationally recognized researcher in machine learning and knowledge discovery in databases who has applied her expertise to problems in personalized and evidence-based medicine, medical imaging, neuroscience, remote sensing, and computer security. A widely published scholar, she has received research funding from a wide range of federal agencies, corporations, and foundations, among them the National Science Foundation, the National Institutes of Health, NASA, DARPA, IBM, and the Multiple Sclerosis Society.

“A leader in computing research, Dr. Brodley’s achievements have contributed greatly to the advancement of the changing field of computer science,” said Stephen W. Director, provost and senior vice president for academic affairs. “An accomplished leader and scholar, she will take Northeastern’s leadership in computer science to the next level, both within and beyond CCIS.”

Brodley serves on the boards of the International Machine Learning Society and DARPA’s Information Science and Technology Board. Among her many professional recognitions, she has received an NSF CAREER Award and memberships to the Defense Science Study Group of DARPA and the AAAI Executive Council.

“Northeastern is a university on the move and I am thrilled to be joining as the next dean of the College of Computer and Information Science,” Brodley said. “In today’s information-driven age it is more important than ever to integrate computing and information science into every academic field. I look forward to working with faculty, staff, and students to build upon the great momentum that has already made CCIS one of the nation’s most exciting interdisciplinary colleges.”

She is also a member of the editorial boards of Machine Learning, the Journal of Machine Learning Research, and Data Mining and Knowledge Discovery. She is co-chairing the 2014 conference of the Association for the Advancement of Artificial Intelligence, and from 2008 to 2011 she co-chaired the Committee on the Status of Women in Computing Research.

Brodley earned her bachelor’s degree in mathematics and computer science from McGill University in 1985 and her doctorate in computer science from the University of Massachusetts at Amherst in 1994. Prior to joining Tufts, she was on the electrical and computer engineering faculty at Purdue University, where she was honored with the Ruth and Joel Spira Outstanding Teacher Award in 1998. In 2010, the University of Massachusetts recognized Brodley with the Alumni Award for Outstanding Educator.

In an email to the faculty of CCIS, Director thanked Larry Finkelstein for his outstanding contributions as dean of the college for 12 years. “Larry’s dedication to the college and to the university, in addition to his strong leadership throughout his tenure as dean, has been key in helping the college achieve the level of excellence it enjoys today,” he wrote.

The Race Against the T Mon, 28 Jul 2014 19:55:40 +0000

On Friday, Michael Ravert, CIS’16, attempted to answer a question many Bostonians have pondered for years: can the average person outrun the MBTA’s Green Line?

On that day, the answer was yes. But it was pretty close.

Ravert is currently working on co-op at RunKeeper, a Boston-based company that created a GPS fitness-tracking app. He and three of his coworkers raced a trolley on the Green Line’s B branch down Commonwealth Avenue, starting from the Boston College station and ending about four miles away at Blandford Street station near Kenmore Square.

The final result: Ravert crossed the finish line first in a time of 24:08. The trolley made it in 24:49.

“This was an awesome experience,” Ravert said after running. “This was a fun race to do. We did a great job pacing each other.”

RunKeeper and event-organizing website The Boston Calendar coordinated the event, dubbed “Outrun the Green Line.” Ravert said he signed up because it was a great way to get to know his new colleagues better.

“An email was sent out about a month ago and I had just started my co-op so I figured it would be fun,” Ravert said. “I didn’t really think anything of it until a couple weeks ago when the race really started to become popular online.”

RunKeeper created a website for the event where people could monitor the runners’ and the trolley’s progress. The trolley held a sizeable lead on the runners during the first half of the race through the hills of Boston’s Brighton and Allston neighborhoods. But the runners caught up once the road got flatter.

Ravert crossed the finish line as the trolley waited at the intersection of Commonwealth Avenue and Blandford Street. A runner since high school, Ravert said he didn’t run any more than usual to prepare for the race.

Ravert, who is studying computer science at Northeastern, learned about RunKeeper’s co-op at the university’s co-op fair. A friend suggested he look specifically at RunKeeper because the company combines two of his passions: running and computer science.

On co-op at RunKeeper, Ravert has worked on app development for both Android and iPhone. He said it’s been a valuable learning experience thus far, particularly because it’s his first foray into iPhone development.

Ravert credits his work as a tutor and undergraduate teaching assistant in the College of Computer and Information Science with helping prepare him for the co-op. “Teaching others certain programs that we use at RunKeeper helped me to understand them better, as well,” he said.

Global Impact of the Ebola Outbreak Mon, 28 Jul 2014 13:59:28 +0000

The Ebola virus has been spreading in West Africa since March, but the current outbreak over the past few weeks has reached new heights and elevated the crisis. More than 650 people have died, and in recent days it was learned that Sierra Leone’s leading Ebola doctor in charge of battling the outbreak has himself contracted the virus. Here, network scientist Alessandro Vespignani, the Sternberg Family Distinguished Professor of Physics at Northeastern who has developed computational models to predict the spread of infectious diseases, discusses the Ebola outbreak. Vespignani holds joint appointments in the College of Science, the Bouvé College of Health Sciences, and the College of Computer and Information Science.

What sparked the recent surge of the Ebola outbreak, and could it have been predicted?

Ebola’s outbreaks among human populations usually result from handling infected wild animals. Although the virus reservoir has not yet been identified with certainty, in Africa fruit bats are believed to be the natural hosts for the virus. It is therefore impossible to predict the start of an outbreak, although it is possible to project its unfolding if containment and mitigation policies are not implemented in a timely manner. Human-to-human transmission mostly occurs through blood or bodily fluids from an infected person, thus affecting mostly caregivers in the family or in healthcare settings where the proper cautions aren’t taken. Isolation of cases in well-equipped healthcare settings and the use of rigid protection protocols for handling burial procedures are crucial for the containment of outbreaks.

How is this outbreak different from those that have occurred in the past?

Since March, the World Health Organization has reported more than 1,000 cases of Ebola with a fatality rate of about 60 percent, depending on the specific location. Although previous outbreaks recorded fatality rates of up to 90 percent, this current outbreak is the worst in terms of the number of infected people. This outbreak is also somewhat unique because it has hit major urban areas such as Conakry, the capital city of Guinea. In the past, Ebola has usually emerged in less populated rural regions. Isolation and control in large cities is obviously more challenging. Capital cities are also major transportation hubs for travelers potentially spreading the outbreak to other geographical regions.

Does this outbreak present an international concern and if so, how great is that concern?

The risk of infection for travelers is minimal because infection results from direct contact with sick individuals. However, the presence of the disease in major cities with airports introduces the possibility that infected people not yet in the acute stage of the disease will get on a plane and spread the virus internationally. This global spreading can be modeled using human mobility network data. Although we cannot rule out the possibility of cases spreading to major European or American airport hubs, the probability of these events is quite small because the major airports in the region have limited traffic to international destinations. On the other hand, the persistence in time of the outbreak and the growing number of cases are increasing the probability that we might see it spread internationally. This makes it imperative to win the battle in containing the outbreak in the region as soon as possible.
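The mobility-network modeling Vespignani describes can be illustrated with a toy metapopulation model: each city runs simple SIR (susceptible-infected-recovered) dynamics, and daily travel couples the cities together. Every number below (city sizes, travel fractions, transmission and recovery rates) is invented for this sketch; it is not his group's actual model, which uses real census and airline data.

```python
# Toy metapopulation SIR model: local epidemics coupled by a mobility network.
def simulate(days=120, beta=0.3, gamma=0.1):
    pops = [1_000_000, 500_000, 2_000_000]   # three hypothetical cities
    travel = [                                # daily fraction traveling i -> j
        [0.0, 1e-3, 5e-4],
        [1e-3, 0.0, 2e-3],
        [5e-4, 2e-3, 0.0],
    ]
    n = len(pops)
    I = [100.0 if i == 0 else 0.0 for i in range(n)]  # seed outbreak in city 0
    S = [pops[i] - I[i] for i in range(n)]
    R = [0.0] * n
    for _ in range(days):
        # Within-city SIR dynamics (discrete daily steps).
        for i in range(n):
            N = S[i] + I[i] + R[i]
            new_inf = beta * S[i] * I[i] / N  # new infections today
            new_rec = gamma * I[i]            # recoveries today
            S[i] -= new_inf
            I[i] += new_inf - new_rec
            R[i] += new_rec
        # Mobility: a fraction of every compartment moves along the network,
        # which is how infected travelers seed outbreaks in other cities.
        for comp in (S, I, R):
            flows = [[comp[i] * travel[i][j] for j in range(n)] for i in range(n)]
            for i in range(n):
                comp[i] += sum(flows[j][i] for j in range(n)) \
                         - sum(flows[i][j] for j in range(n))
    return S, I, R

S, I, R = simulate()
print("Infected per city after 120 days:", [round(x, 1) for x in I])
```

Even in this crude sketch, the outbreak seeded in one city reaches the others through the travel terms alone, which is the basic mechanism behind the probabilities of international spread discussed above.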


Tablelist Collects $1.5 million, with Visions of Becoming the OpenTable of After Dark Wed, 23 Jul 2014 18:20:19 +0000

In his prior life as a real estate broker for international students in Boston, Julian Jung went to a lot of nightclubs and saw a lot of bottles of high-end liquor ordered. Jung says he has witnessed upwards of $15,000 spent in a single night in Boston, and has seen epic $50,000 evenings in New York. Yet he was surprised how complicated it was for big spenders and groups of friends out for a special occasion to nab a table at a nightclub. “I was using apps like Uber and Hotel Tonight a lot,” says Jung, who graduated from Northeastern University last year. “And I was surprised that there wasn’t a way to book tables at clubs and lounges, and get that level of convenience that I wanted.” Jung says there are more than 10,000 lounges, bars, and roof decks that offer some sort of VIP service.

His solution, the mobile app Tablelist, went live last November in Boston, and has since expanded to Las Vegas, New York City, and the Hamptons. “Like Uber, we have options that go from the taxi level up to a luxury SUV,” Jung says. “We have $150 tables you can reserve at a lounge, all the way up to $5000 or $7000 tables. We did a $7000 table in the Hamptons recently.” These “bottle service” or “table service” reservations include bottles of wine or liquor.

Last week, Jung wrapped up a $1.5 million funding round on the site AngelList; new backers include Boston-based Twitter exec Wayne Chang and Jason Carroll of the hedge fund Hudson River Trading. That brings the total the seven-person startup has raised to $2 million.

The Tablelist app is available for iPhone and Android. After entering a credit card, you can browse tables available for a given night and make a reservation. Then, you choose the brands of Champagne or liquor you’d like included. When you arrive at the club, there’s no waiting in line. Tablelist takes a 15 to 25 percent fee of the gross booking. (From left in the photo: Tablelist team members Brin Chartier, Julian Jung and Alex Johnson.)

Tablelist currently works with about 30 venues in and around Boston, including Shrine at Foxwoods, and 40 in New York. Jung says that Los Angeles and Washington, D.C. are next. Kyla Moore, formerly at the Speakeasy Hospitality Group in Boston, which runs clubs like Tunnel and Minibar, handles venue relationships for the startup.

Just as Uber introduced more urbanites to town car service, Jung has visions of “opening up table service to more of the mass market. We’re demystifying this industry,” he says, which has previously relied on promoters and concierges to fill VIP tables. And he sees natural opportunities for expansion. “Our clients want us to handle everything related to nightlife: tickets to shows, the reservation at a great restaurant, and so on.”

Tablelist is based near South Station in Boston, at the WeWork shared space.

Article from BetaBoston
Virtual Health Assistants and Trust in Patient Care Thu, 10 Jul 2014 12:54:10 +0000

Doctors, patients and insurance companies don’t agree on much. But ask them what they think of call centers, and you can rest assured you’ll get complaints from every quarter.

Not only do patients dread having to navigate cumbersome voicemail menus to get even the simplest medical information, but doctors, nurses, administrators – and just about everyone in the medical community – also dislike them. Even call center operators are frustrated by them.

They cost a lot of money, waste a lot of time and often leave more questions unanswered than answered.

And yet, the entire medical industry relies on them. Why?

There hasn’t really been a better alternative: when patients have a specific question about their health, nothing has been able to match a live person’s ability to help them.

Companies have been trying to solve the problem by creating websites, chock full of information. But the sites are often just as confusing to navigate, with their morass of tabs, links and FAQ sections that rarely answer every question.

But there is a way to enhance websites and smartphone apps, turning them into tools that deliver personalized service that equals what live representatives can offer, with added value that a call center simply can’t match.

Imagine going to a website or logging into an app in which you were greeted by a friendly virtual assistant who asked what you wanted – and then immediately gave you the right answer – or directed you to the right place.

That’s the promise of virtual health assistants (VHAs) — digital representatives (also called avatars) that live on websites, smart phones and other devices.

VHAs are going to see increasing use in the healthcare arena, where they can do everything from answering billing questions to encouraging patients to remain adherent to treatment and wellness regimens.

They can save companies money while empowering patients to achieve better outcomes. Infinitely scalable, they can help millions of people navigate all kinds of information and deliver the high-touch, proactive engagement that call centers can’t afford to offer.

Why Virtual Health Assistants are Necessary

Left on their own, patients often make small mistakes that affect their health in big ways. Those simple mistakes include everything from forgetting to take their medicine to delaying or avoiding treatment to failing to schedule important health tests like mammograms and colonoscopies.

There are already other technology-driven solutions, such as phone applications, medication text reminder systems, smart medication bottles and shipments of medication and goods that are automatically sent when an old prescription is due to run out. Those help, but they don’t go far enough.

Virtual health assistants accomplish what other solutions can – and a lot more.

To be clear, VHAs won’t replace real humans. In fact, they work in conjunction with them. They are an engagement technology that is infused with the knowledge of a specific domain, therapy or wellness regimen as defined by each client. They are infinitely scalable, thereby saving money by addressing issues once reserved for call centers and healthcare professionals. When they can’t answer a question, they are programmed to direct you to the person or place that can.

For instance, VHAs can track individual health needs and send out reminders (with the full consent of patients) as often as a patient needs them. They can then communicate with healthcare providers (again with full consent) to help doctors figure out treatment plans.

As the number of patients outstrips the number of healthcare professionals available to serve them, VHAs will allow scalable, effective provision of wellness, prevention and disease-management care.

Perhaps most importantly, VHAs can actually converse and empathize with patients using real language, thereby developing relationships with them. That ability changes the whole equation.

VHAs are there to answer patients’ personal questions about delicate topics, such as sexual function for patients on specific types of medication. In fact, in some cases it turns out that it can actually be easier to talk to a VHA than to a real person, who might be full of judgments. And achieving this level of trust is something new and important in VHA capabilities.

How VHA’s Establish Relationships with Patients

These assistants are there to help. They work because they don’t rely merely on voice-recognition software; technology has improved to the point where they are now able to use and understand natural language.

But in addition to being able to converse with people, VHAs use personal data and context to establish emotional and social relationships much in the same way that people do – by delivering valuable information.

For instance, your VHA might start a conversation by telling you about the local weather and traffic conditions, which it knows because it can read (with permission, of course) the GPS embedded in your smartphone and has access to data sites that store such information.

Once your VHA is talking to you, it might offer you something useful, such as alternate routes. Then, it might quiz you about what you ate for breakfast and when you planned to exercise.

Sure, you understand that the VHA is a computer program, but because it gives you real information you can use, you begin to trust it, just as you would a real person.

In other words, you’re making a real connection with your VHA and will likely grow more emotionally attached to, and dependent on, the technology.

That shouldn’t sound far-fetched. Humans make emotional connections with objects all the time. From their earliest years, most toddlers cling to a favorite inanimate object, such as a rattle or a fuzzy blanket. When we grow older, we become attached to other things. Some 55% of us would give up caffeine and 70% would give up alcohol before giving up our smartphones, according to a recent survey.

The point is that we can develop a human-like connection with objects, including VHAs. Dr. Timothy Bickmore, a professor at Northeastern University, calls this type of connection a “therapeutic alliance.” He likens it to the relationship patients once had with a neighborhood doctor or pharmacist.

Because patients develop this alliance, they learn to trust virtual assistants. That’s what leads to real change. Science shows that human behavior changes when several factors are present: people must be motivated, they must have the ability to change and they must be spurred to change by a specific trigger, according to B.J. Fogg, a popular behavioral psychologist.

Virtual assistants also have the advantage of being with patients 24 hours a day, with the ability to engage them at the precise moments they want and need to interact.

That means that the (virtually endless) tasks and activities a VHA can facilitate and monitor — actions like being able to answer questions in real-time about medications, proactively and discreetly answer sensitive questions, or even provide on-going measurements of disability progressions — actively work to build an invaluable and necessary level of trust into patients’ care.

In short, VHAs will have a central role in how we interact with a broader digital world, allowing patients to make the most of the vast resources available to them and offering healthcare professionals a means to deliver a high level of personalized service without having to employ more people.

There is no easier way to connect with patients than by talking to them. It’s the one natural way of gathering real, unfiltered patient data. That’s something that our current, inefficient call-center patient service models cannot provide, and virtual health assistants are already being deployed to transform healthcare.
Article from Healthcare IT News
Guest Post: A Facebook Apologia Tue, 08 Jul 2014 20:27:03 +0000

As you might have heard, Facebook recently released a study in collaboration with researchers at Cornell University on the spread of emotional sentiment through the social network. It has spurred a huge, media-fueled debate among the mainstream public. Below is a post that Brian Keegan, a post-doctoral researcher in Northeastern professor David Lazer’s lab, wrote on his personal blog explaining the science, the debate, and its potential impacts on the field.

Last week, the Proceedings of the National Academy of Sciences (PNAS) published a study that conducted a large-scale experiment on Facebook. The authors of the study included an industry researcher from Facebook as well as academics at the University of California, San Francisco and Cornell University. The study employed an experimental design that reduced the amount of positive or negative emotional content in 689,000 Facebook users’ news feeds to test whether emotions are contagious. The study has since spawned a substantial controversy about the methods used, the extent of its regulation by academic institutions’ review boards, the nature of participants’ informed consent, the ethics of the research design itself, and the need for more explicit opt-in procedures.

In the face of even-tempered thinking from a gathering mob, I want to defend the execution and implications of this study. Others have also made similar arguments [1, 2, 3]; I guess I’m just a slow blogger. At the outset, I want to declare that I have no direct stake in the outcome of this brouhaha. However, I do have professional and personal relationships with several members of the Facebook Data Science team (none of whom are authors on the study), although the entirety of this post reflects only public information and my opinions alone.

First, as is common in the initial reporting surrounding scientific findings, there was some misinformation around the study that was greatly magnified. These early criticisms claimed the authors misrepresented the size of the observed effects (they didn’t) or that the research wasn’t reviewed by the academic boards charged with human subjects protection (it was). There is likewise a pernicious tendency for the scientific concept of experimental manipulation to be misinterpreted in its everyday sense implying deception and chicanery: there is no inherent maliciousness in randomly assigning participants to conditions for experimental study. Other reporting on the story has sensationalistically implied users were subjected to injections of negative emotional content so their resulting depression could be more fully quantified. In reality, the study only withheld either positive or negative content from users, which resulted in users seeing more of the posts they would have seen anyway. In all of these, the hysteria surrounding a “Facebook manipulates your emotions” or “transmits anger” story got well ahead of any sober reading of the research reported by the authors in the paper.

Second, on the substance of the research, there are still serious questions about the validity of the methodological tools used, the interpretation of results, and the use of inappropriate constructs. Prestigious and competitive peer-reviewed journals like PNAS are not immune from publishing studies with half-baked analyses. Pre-publication peer review (which this study went through) is important for serving as a check against faulty or improper claims, but post-publication scrutiny from the scientific community, and ideally replication, is an essential part of scientific research. Publishing in PNAS implies the authors were seeking both a wider audience and a heightened level of scrutiny relative to publishing this paper in a less prominent outlet. To be clear: this study is not without its flaws, but these debates, in and of themselves, should not be taken as evidence that the study is irreconcilably flawed. If the bar for publication were anticipating every potential objection or addressing every methodological limitation, there would be precious little scholarship for us to discuss. Debates about the constructs, methods, results, and interpretations of a study are crucial for synthesizing research across disciplines and increasing the quality of subsequent research.

Third, I want to move to the issue of epistemology and framing. There is a profound disconnect in how we talk about the ways of knowing how systems like Facebook work and the ways of knowing how people behave. As users, we expect these systems to be responsive, efficient, and useful, and so companies employ thousands of engineers, product managers, and usability experts to create seamless experiences. These user experiences require diverse and iterative methods, which include A/B testing to compare users’ preferences for one design over another based on how they behave. These tests are pervasive, active, and ongoing across every conceivable online and offline environment, from couponing to product recommendations. Creating experiences that are “pleasing”, “intuitive”, “exciting”, “overwhelming”, or “surprising” reflects the fundamentally psychological nature of this work: every A/B test is a psych experiment.
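The claim that every A/B test is a psych experiment can be made concrete. A minimal sketch of such a test, with invented variant conversion rates (5% for A, 6% for B) and simulated users, compares the two observed rates with a standard two-proportion z-test:

```python
import math
import random

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)   # rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Simulated experiment: 10,000 users per randomly assigned variant, each
# converting (1) or not (0) at an invented underlying rate.
random.seed(42)
a = [1 if random.random() < 0.05 else 0 for _ in range(10_000)]
b = [1 if random.random() < 0.06 else 0 for _ in range(10_000)]

z = two_proportion_z(sum(a), len(a), sum(b), len(b))
print(f"conversion A={sum(a)/len(a):.3f}, B={sum(b)/len(b):.3f}, z={z:.2f}")
# Conventionally, |z| > 1.96 marks the difference as significant at the 5% level.
```

The design and the statistics are the same whether the outcome being compared is a click-through rate or, as in the PNAS study, an emotional measure, which is exactly the disconnect in framing described above.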

Somewhere deep in the fine print of every loyalty card’s terms of service or online account’s privacy policy is some language in which you consent to having this data used for “troubleshooting, data analysis, testing, research,” which is to say, you and your data can be subject to scientific observation and experimentation. Whether this consent is “informed” by the participant having a conscious understanding of implications and consequences is a very different question that I suspect few companies are prepared to defend. But why does a framing of “scientific research” seem so much more problematic than contributing to “user experience”? How is publishing the results of one A/B test worse than knowing nothing of the thousands of invisible tests? They reflect the same substantive ways of knowing “what works” through the same well-worn scientific methods.

Fourth, there has been no substantive discussion of what the design of informed consent should look like in this context. Is it a blanket opt-in/out to all experimentation? Is consent needed for every single A/B iteration, or only those intended for scientific research? Is this choice buried alongside all the other complex privacy buttons, or are users expected to manage pop-ups requesting their participation? I suspect the omnipresent security dialogues that Windows and OS X have adopted to warn us against installing software have done little to reduce risky behavior. Does adding another layer of complexity around informed consent improve the current anxieties around managing complex privacy settings? How would users go about differentiating official requests for informed consent from abusive apps, spammers, and spoofers? Who should be charged with enforcing these rules, and who are they in turn accountable to? There’s been precious little on designing more informed consent architectures that balance usability, platform affordances, and the needs of researchers.

Furthermore, we might also consider the ethics of this nascent socio-technical NIMBYism. Researchers at Penn State have looked at the design of privacy authorization dialogues for social networks but found that more fine-grained control over disclosure reduced adoption levels. We demand ever more responsive and powerful systems while circumscribing our contributions yet demanding benefits from others’ contributions. I imagine the life of such systems would be poor, nasty, brutish, and short. Do more obtrusive interventions or incomplete data collection in the name of conservative interpretations of informed consent promote better science and other public goods? What are the specific harms that we should strive to limit in these systems, and how might we re-tailor 40-year-old policies to these ends?

I want to wrap up by shifting the focus of this conversation from debates about a study that was already done to what should be done going forward. Some of the more extreme calls I’ve seen have advocated for academic societies or institutions to investigate and discipline the authors, others have called for embargoing studies using Facebook data from scholarly publication, and still others have encouraged Facebook employees to quit in protest of a single study. All this manning of barricades strikes me as a grave over-reaction that could have calamitously chilling effects on several dimensions. If our overriding social goal is to minimize real or potential harm to participants, what best accomplishes this going forward?

Certainly expelling Facebook from the “community of scholars” might damage its ability to recruit researchers. But are Facebook users really made safer by replacing its current crop of data scientists who have superlative social science credentials with engineers, marketers, and product managers trying to ride methodological bulls they don’t understand? Does Facebook have greater outside institutional accountability by closing down academic collaborations and shutting papers out from peer review and publication? Are we better able to know the potential influence Facebook wields over our emotions, relationships, and politics by discouraging them from publicly disclosing the tools they have developed? Is raising online mobs to attack industry researchers conducive to starting dialogues to improve their processes for informed consent? Is publicly undermining other scientists the right strategy for promoting evidence-based policy-making in an increasingly hostile political climate?

Needless to say, this episode speaks to the need for rapprochement and sustained engagement between industry and academic researchers. If you care about research ethics, informed consent, and well-designed research, you want companies like Facebook deeply embedded within and responsible to the broader research community. You want the values of social scientists to influence the practice of data science, engineering, user experience, and marketing teams. You want the campus to be open to visiting academic researchers to explore, collaborate, and replicate. You want industry research to be held to academia’s more stringent standards of human subjects protection and regularly shared through peer-reviewed publication.

The Facebook emotional contagion study demands a re-evaluation of prevailing research ethics, design values, and algorithmic powers in massive networked architectures. But the current reaction to this study can only have a chilling effect on that debate by removing a unique form of responsible disclosure through academic collaboration and publishing. This study is guaranteed to serve as an important case study in the professionalization of data science. But academic researchers should make sure their reactions do not unintentionally inoculate industry against the values and perspectives of social inquiry.

New Grants Target Innovative Teaching Strategies Tue, 08 Jul 2014 20:21:53 +0000

Seven faculty research projects have been selected for grant funding through a new initiative that promotes innovative teaching approaches in the classroom and advances undergraduate learning at Northeastern.

The competitive grant program, sponsored by the Office of the Provost, is designed to promote exploration and innovation in teaching and learning by supporting evidence-based activities that result in deeper learning. The program launched this year and will be offered annually going forward.

“The goal is to enhance student learning,” said Susan Ambrose, senior vice provost for undergraduate education and experiential learning at Northeastern. “We are continually looking to improve the quality of education we provide our students. We have creative and innovative faculty, and we wanted to provide these grants to allow them to do things that haven’t been done before.”

The seven research projects selected were submitted by an interdisciplinary group of faculty. In one project, researchers from the Department of Physical Therapy will create 3-D models of internal body parts to further students’ understanding of cross-sectional anatomy, particularly the brain. In another project, game design faculty in the College of Arts, Media, and Design and the College of Computer and Information Science will integrate a computer game they are developing, called Mad Science, into the classroom for experiential learning. The game allows users to create and participate in scientific experiments on social behavior in fun and engaging ways.

Another project, led by Hubert Ho, lecturer in the Department of Music, and Michael Epstein, associate professor in the Department of Speech-Language Pathology and Audiology, involves developing a new course for students across a range of academic programs who are interested in music and sound perception. The course would feature a variety of interactive learning tools and would be tailored to individual students’ needs, so they could focus on sections they’re unfamiliar with and skip those they’ve previously covered in other courses. By the end, all students would have a basic understanding of the topics, which range from music theory and sound physics to music and hearing research. The course would promote interdisciplinary collaboration and could serve as a model for future courses on other topics.

The program aligns with Northeastern’s commitment to high-quality undergraduate education and continued support of programs and projects that innovate and enrich undergraduate learning. This commitment includes an emphasis on use-inspired research that addresses global challenges and the university’s longstanding experiential education model, anchored in its signature co-op program.

Faculty will present their research project updates and findings at a conference on May 5, 2015, sponsored by the Center for Advancing Teaching and Learning Through Research, which also provided consultation and resources to faculty who submitted proposals.

A committee comprising faculty from all colleges reviewed the proposals, judging each on the originality of its research approach, the impact the project will have on students, and whether the project includes an assessment component for understanding how well it worked. Ambrose noted that the research projects’ long-term sustainability was an important factor: the goal was to identify new teaching modules and approaches that could be easily integrated into current curricula and evolve over time.

“We thought these seven were fresh approaches, and that’s what we were looking for,” she said. “You can develop something new out of something old, but we were looking for something that is original and sustainable in the long term.”

Massachusetts Open Cloud Project Hopes To Create Ad-Hoc Infrastructure Marketplace Wed, 02 Jul 2014 18:30:27 +0000

Today, the cloud infrastructure market is dominated by a few big companies (Amazon, Google, and Microsoft), but a public/business/academia partnership called the Massachusetts Open Cloud project is hoping to change that by creating an open computing marketplace where you can negotiate whatever services you need from multiple infrastructure vendors.

Peter Desnoyers, a professor at Northeastern University who helped launch the project, explained that while companies like Amazon offer useful services, they have limitations.

First, from an academic perspective, they are closed systems. Their internal teams have access to the systems for research purposes, but anyone outside the company, such as academics who want to study the systems and present papers, is shut out. Academics can go to company conferences and hear employees present papers, but they can’t get deep inside the system, and that’s a real problem for Desnoyers and his fellow academics.

The other is that Amazon and other IaaS vendors offer what he calls the “Henry Ford” approach to IaaS: you can have any color you want, as long as it’s black. In other words, they have certain products they have packaged together. The trouble with this approach, Desnoyers explained, is that people often have very specialized requirements, and the way Amazon designs its products shuts those people out or makes it prohibitively expensive for them to get specialized services.

Desnoyers says the project hopes to create a marketplace where multiple vendors can come together and offer their services in an ad-hoc way, so you might get your compute power from one vendor, your storage from a second, and your memory from a third. Vendors seem to like this approach; participants include industry heavyweights Cisco, Juniper, Intel, Red Hat, and others.
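The ad-hoc composition Desnoyers describes can be sketched as a cost-based selection over per-resource offers from competing vendors. The following Python sketch is purely illustrative; the vendor names, prices, and the `Offer`/`compose_bundle` types are invented for this example and are not part of the MOC project or any real API.

```python
# Illustrative sketch of an "ad-hoc infrastructure marketplace":
# a consumer assembles a bundle by taking the cheapest offer for
# each resource type (compute, storage, memory) across vendors.
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    resource: str   # e.g. "compute", "storage", "memory"
    price: float    # hypothetical price per unit-hour

def compose_bundle(offers, needs):
    """Pick the cheapest offer for each required resource type."""
    bundle = {}
    for resource in needs:
        candidates = [o for o in offers if o.resource == resource]
        if not candidates:
            raise ValueError(f"no vendor offers {resource}")
        bundle[resource] = min(candidates, key=lambda o: o.price)
    return bundle

# Hypothetical marketplace state: two vendors compete on compute,
# while storage and memory each come from a different vendor.
offers = [
    Offer("VendorA", "compute", 0.12),
    Offer("VendorB", "compute", 0.10),
    Offer("VendorB", "storage", 0.03),
    Offer("VendorC", "memory",  0.05),
]

bundle = compose_bundle(offers, ["compute", "storage", "memory"])
for resource, offer in bundle.items():
    print(f"{resource}: {offer.vendor} at ${offer.price}/hr")
```

A real marketplace would of course negotiate on more than price (SLAs, locality, capacity), but the sketch captures the basic idea of sourcing each piece of the stack from a different vendor.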

The colleges involved include Harvard, MIT, UMass Amherst, Boston University and Northeastern.

The Commonwealth of Massachusetts is also involved and the project will be housed at the Massachusetts Green High Performance Computing Center in Holyoke, Mass.

Vendors will contribute equipment and engineering talent, and the goal of the project is to create a commercial offering based on open source tools.

One vendor involved with the MOC project is Red Hat. Jan Mark Holtzer, a senior consulting engineer in the CTO office at Red Hat, says his company can learn a lot from a project like this.

“For us I would see the key opportunities we see around MOC is operational access, understanding large scale cloud infrastructure, and growing skills [around these areas]. We will rotate resources from support and consulting organizations so they can get first hand experience.”

Holtzer says the initial use case for the project probably involves getting vast computing resources for a short period of time to meet a specific need. “Clearly currently the initial use case we see and MOC sees is probably driven by [high performance computing] and MOC would give customers the capability of harvesting large amount of resources and then releasing them quickly,” he said.

Before it becomes a viable commercial entity for vendors like Red Hat, however, he sees its potential as an incubation space for innovation, where participants can experiment with different business models and service-level agreements (SLAs).

But perhaps the biggest advantage of being involved in a project like this from a vendor perspective is very similar to the academic one. They can get real data about how large-scale systems like this work. “Probably the very interesting use case is the ability to get the operational data from such a large scale environment. A lot of cloud services are black boxes. We work with these vendors, but we don’t have the ability to get as much information from inside a large scale infrastructure,” he said.

Holtzer added that there is a huge advantage in making the MOC project operational data transparent and visible.

The fact is there are lots of cloud infrastructure options available out there, but no open marketplace where people can negotiate pricing and access different pieces of the infrastructure. A project like this is at least a starting point for offering a more open way of selling infrastructure services moving forward.

For now it’s experimental, but if it works, it has the potential to change the way enterprise customers deal with IaaS vendors, and that’s significant in itself.

Article from TechCrunch

New Master’s Program to Prepare Future Security Professionals Thu, 19 Jun 2014 13:03:00 +0000

Northeastern will launch a first-of-its-kind, interdisciplinary graduate program this fall to train the next generation of security professionals to face the new and evolving challenges of the 21st century. Through this program, students will be equipped with skills in traditional security studies as well as training in technical fields such as cybersecurity policy, business sustainability, and urban coastal resilience.

The master of science in security and resilience studies will be offered through the College of Social Sciences and Humanities in coordination with the College of Engineering and the College of Computer and Information Science. Stephen Flynn, co-director of Northeastern’s George J. Kostas Research Institute for Homeland Security and director of the Center for Resilience Studies, designed the program.

Flynn, who is a professor of political science, said the country’s national security shouldn’t merely focus on identifying and engaging potential threats or enemies. Security efforts, strategies, and policies must also recognize the wide range of hazards and vulnerabilities, from cyberterrorism to natural disasters, that place the U.S. population and infrastructure at risk, and they must make those systems more secure and resilient. These systems include the energy and transportation sectors, supply chains, and communications networks.

“A good national defense involves more than going after our adversaries overseas,” Flynn said. “Students will learn how to assess and manage the many risks to our communities, critical infrastructure, and the global networks we depend on for our way of life and quality of life.”

The program aligns with Northeastern’s national leadership in research initiatives built around security, which is one of the university’s core research themes along with health and sustainability.

The program is geared toward both recent undergraduates and mid-career professionals, including veterans. Students can choose between a yearlong full-time track and a part-time track, both of which will be offered through the university’s hybrid format of classroom and online learning. They can specialize in one of three areas: cyberspace policy; administration, management, and policy; or counterterrorism. The program will include a capstone project in which students work in the field or conduct supervised research on improving security and resilience.

Flynn said his three decades of experience in the security field informed the program’s design. His experience includes serving in the White House, advising the 9/11 Commission, and consulting with industry. “The security field is dynamic and is rapidly growing outside its traditional base,” Flynn said. “This program is designed to meet what government officials and industry leaders have told me are their workforce needs.

“We are probably going to end up writing the textbooks for this emerging field because there aren’t any right now,” he added. “I’ve been working on this issue for most of my professional career, and I drew on my experiences and numerous resources to make this curriculum cutting edge.”

Schools Adding Computer Coding to Curriculum Wed, 21 May 2014 17:27:23 +0000

Students as young as kindergartners are learning computer programming as Massachusetts schools join a growing national movement to prepare students for 21st-century jobs.

Once considered an extracurricular activity for geeks, coding increasingly is being seen as both an essential life skill and a potential pathway toward becoming the next Mark Zuckerberg.

“The coding craze is the biggest uptick in education in years,” said Elliot Soloway, a professor of computer science at the University of Michigan. “What we’re seeing is a new content area being incorporated into the curriculum at lightning speed, faster than any other content area has been assimilated into teaching. Schools are desperate for the new, and coding’s not just any new; it’s one that has currency.”

Since December, 1 million students have enrolled in an online computer science course offered by a nonprofit founded by Harvard graduate Hadi Partovi and backed by Facebook’s Zuckerberg and Microsoft’s Bill Gates. The group recently announced partnerships with 30 school districts, including Andover, Arlington, Ashland, Brookline, Littleton, Milton, Needham, Newton, Reading, Waltham, Wayland, and Wellesley.

Newton Public Schools is using the group’s materials to help teach the foundational skills for coding in grade 2 and to build on that in middle school, said Leo Brehm, the district’s director of information technology.

This summer, a Newton North High School teacher also will undergo training through the group to teach basic computer science in grades 9 and 10 and more advanced classes in grades 11 and 12, Brehm said.

“Computer science is a natural outlet to exercise critical thinking and problem-solving skills, and it enhances mathematics, engineering and robotics,” Brehm said. “There’s also a shortage of people in the workforce who can code.”

Boston Public Schools expects to become a formal district partner within the next year or two, after it prepares its schools from an infrastructure and staffing standpoint, work that is well underway, a BPS spokesman said.

Article from The Boston Herald
