Let’s Transform Canada’s AI Research Into Real World Adoption

October 2025 – Canada has world-class strength in AI research but continues to fall short in widespread adoption, according to a new report from the C.D. Howe Institute. On the heels of the federal government’s announcement of a new AI Strategy Task Force, the report highlights the urgent need to bridge the gap between research excellence and real-world adoption.

In “AI Is Not Rocket Science: Ideas for Achieving Liftoff in Canadian AI Adoption,” Kevin Leyton-Brown, Cinda Heeren, Joanna McGrenere, Raymond Ng, Margo Seltzer, Leonid Sigal, and Michiel van de Panne note that while Canada ranks second globally in top-tier AI researchers and first in the G7 for per capita publications, it is only 20th in AI adoption among OECD countries. “This matters for the economy as a whole, because such knowledge translation is a key vehicle for productivity growth,” the authors say. “It is terrible news, then, that Canada experienced almost no productivity growth in the last decade, compared with a rate 15 times higher in the United States.”

The authors argue that new approaches to knowledge translation are needed because AI is not “rocket science”: instead of focusing on a single industry sector, the discipline develops general-purpose technology that can be applied to almost anything. This makes it harder for Canadian firms to find the right expertise and for academics to sustain ties with industry. Existing approaches – funding academic research, directly subsidizing industry efforts through measures such as SR&ED and superclusters, and promoting partnerships through programs like Mitacs and NSERC Alliance – have not solved the problem.

Four ideas to help firms leverage Canadian academic strength to fuel their AI adoption include: a concierge service to match companies with experts, consulting tied to graduate student scholarships, “research trios” that link AI specialists with domain experts and industry, and a major expansion of AI training from basic literacy to dedicated degrees and continuing education. Drawing on their experiences at the University of British Columbia, the authors show how local initiatives are already bridging gaps between academia and industry – and argue these models should be scaled nationally.

“Canada’s unusual strength in AI research is an enormous asset, but it’s not going to translate into real-world productivity gains unless we find better ways to connect AI researchers and industrial players,” says Kevin Leyton-Brown, professor of computer science at the University of British Columbia and report co-author. “The challenge is not that AI is too complicated – it’s that it touches everything. That means new models of partnership, new incentives, and new approaches to education.”

AI Is Not Rocket Science: 4 Ideas in Detail

Idea 1: A Concierge Service for Matchmaking

We have seen that it is hard for industry partners to know who to contact when they want to learn more about AI. Conversely, it is at least as hard for AI experts to develop a broad enough understanding of the industry landscape to identify applications that would most benefit from their expertise. Given the potential gains to be had from increasing AI adoption across Canadian industry, nobody should be satisfied with the status quo.

We argue that this issue is best addressed by a “concierge service” that industry could contact when seeking AI expertise. While matchmaking would still be challenging for the service itself, it could meet this challenge by employing staff who are trained in eliciting the AI needs of industry partners, who understand enough about AI research to navigate the jargon, and who proactively keep track of the specific expertise of AI researchers across a given jurisdiction. This is specialized work that not everyone could perform! However, many qualified candidates do exist (e.g., PhDs in the mathematical sciences or engineering). Such staff could be funded in a variety of different ways: for example, by an AI institute; a virtual national institute focused on a given application area; a university-level centre like UBC’s Centre for Artificial Intelligence Decision-making and Action (CAIDA); a nonprofit like Mitacs; a provincial ministry for jobs and economic growth; or the new federal ministry of Artificial Intelligence and Digital Innovation.

Once an organization that facilitates matchmaking has been set up, it could make sense for the same office to provide additional services that speed AI adoption but are not core strengths of academics: for example, project management, programming, and AI-specific skills training and recruitment. Overall, such an organization could be funded by some combination of direct government support, direct cost recovery, and an overhead model that reinvests revenue from successful projects into new initiatives.

Idea 2: Consultancy in Exchange for Student Scholarships

Many businesses that would benefit from adopting AI do not need custom research projects and do not want to wait a year or more to solve their problems. The lowest-hanging fruit for Canadian AI adoption is ensuring that industry is well informed about potentially useful, off-the-shelf AI technologies. We thus propose a mechanism under which AI experts would provide limited, free consulting to local industry. AI experts would opt in to being on a list of available consultants. A few hours of advice would be free to each company, which would then have the option of co-paying for a limited amount of additional consulting, after which it would pay full freight if both parties wanted to continue. The company would own any intellectual property arising from these conversations, which would thus focus on ideas in the public domain. If the company wanted to access university-owned IP, it could shift to a different arrangement, such as a research contract. This system would work best given a concierge service like the one we just described. The value offered per consulting hour clearly depends on the quality of the academic–industry match, and some kind of vetting system would be needed to ensure the eligibility of industry participants.

Why would an AI expert sign up to give advice to industry? All but the best-funded Canadian faculty working in AI report that obtaining enough funding to support their graduate students is a major stressor. Attempting to establish connections with industry is hard work, and such efforts pay off only if the industry partner signs on the dotted line and matching funds are approved. There is thus space to appeal to faculty with a model in which they “earn” student scholarships for a fixed amount of consulting work. For example, faculty could be offered a one-semester scholarship for every eight hours set aside for meetings with industry, meaning that one weekly “industry office hour” would indefinitely fund two graduate students. Consulting opportunities could also be offered directly to postdoctoral fellows or senior (e.g., post-candidacy) PhD students in exchange for fellowships. In such cases, trainees should be required to pass an interview, certifying that they have both the technical and soft skills necessary to succeed in the consulting role. The concierge service could help decide which industry partners could be routed to PhD students and which need the scarcer consulting slots staffed by faculty members.
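
The scholarship arithmetic sketched here can be sanity-checked in a few lines of Python. Note that the sixteen-week semester length is our own assumption, not a figure from the report:

```python
# Sanity check of the proposed consulting-for-scholarships exchange rate.
# Assumption (not stated in the report): a semester spans 16 teaching weeks.
HOURS_PER_SCHOLARSHIP = 8   # one one-semester scholarship per 8 consulting hours
WEEKS_PER_SEMESTER = 16     # assumed semester length
WEEKLY_OFFICE_HOURS = 1     # one "industry office hour" per week

hours_per_semester = WEEKLY_OFFICE_HOURS * WEEKS_PER_SEMESTER
scholarships_per_semester = hours_per_semester / HOURS_PER_SCHOLARSHIP
print(scholarships_per_semester)  # prints 2.0: two students funded each semester
```

Under that assumption, one weekly office hour accrues two scholarships per semester, which is what lets two students be funded continuously.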

The system would offer many benefits. From the industry perspective, it would make it straightforward to get just an hour or two of advice. This might often be enough to allow the company to start taking action towards AI adoption: there is a rich ecosystem of high-performance, reliable, and open-source AI tools; often, the hard part is knowing what tool to use in what way. Beyond the value of the advice itself, consulting meetings offer a strong basis for building relationships between academics and industry representatives, in which the academic plays the role of a useful problem solver rather than of a cold-calling salesperson. These relationships could thus help to incubate Mitacs/Alliance-style projects when research problems of mutual interest emerge (though also see our idea below about how restructuring such projects could help further).

For academics, the system would constitute a new avenue for student funding that would reward each hour spent with a predictable amount of student support. Furthermore, it would offer scaffolded opportunities to deepen connections with industry. The system would come with no reporting requirements beyond logging the time spent on consulting. The faculty member would be free to use earned scholarships to support any student (regardless, for example, of the overlap between the student’s research and the topics of interest to companies), increasing flexibility over the Mitacs/Alliance system, in which specific students work with industry partners. Students who self-funded via consulting would learn valuable skills and would expand their professional networks, improving prospects for post-graduation employment.

Finally, the system would also offer multiple benefits from the government’s perspective. It would generate unusually high levels of industrial impact per dollar spent (consider the number of contact hours between academia and industry achieved per dollar under the funding models mentioned in Section 3). All money would furthermore go towards student training. The system would automatically allocate money where it is most useful, directing student funding to faculty who are both eager to take on students and relevant to industry, all without the overhead of a peer-review process. And it would generate detailed impact reports as a side effect of its operations, since each hour of industry–academia contact would need to be logged to count towards student funding.

Idea 3: Grants for Research Trios

Our third proposal is an approach for expanding the Mitacs/Alliance model to make it work better for AI. Industry–academia partnerships leverage two key kinds of expertise from the academic side: methodological know-how for solving problems and knowledge about the application domain used for formulating such problems in the first place. In fields for which the set of industry partners is relatively small and relatively stable, it makes sense to ask the same academics to develop both kinds of expertise. In very general-purpose domains like AI, it holds back progress to ask AI experts to become domain experts, too. Instead, it makes sense to seek domain knowledge from other academics who already have it.

We thus propose a mechanism that would fund “research trios” rather than bilateral research pairings. Each trio would contain an AI expert, an academic domain expert, and an industry partner. This approach capitalizes on the fact that there is a huge pool of academic talent outside core AI with deep disciplinary knowledge and a passion for applying AI. While such researchers are typically not in a position to deeply understand cutting-edge AI methodologies, they are ideally suited to serve as a bridge between researchers focused on AI methodologies and Canadian industrial players seeking to achieve real-world productivity gains. In our experience at UBC, the pool of non-AI domain experts with an interest in applying AI is considerably larger than the pool of AI experts. One advantage of this model is that projects can be initiated by the larger population of domain experts, who are also more likely to have appropriate connections to industry. Beyond this, involving domain experts increases the likelihood that a project will succeed and gives industry partners more reason to trust the process while a solution is being developed.

The model meets a growing need for funding researchers outside computer science for projects that involve AI, rather than concentrating AI funding within a group of specialists. At the same time, it avoids the pitfall of encouraging bandwagon-jumping “applied AI” projects that lack adequate grounding in modern AI practices. Finally, it not only transfers AI knowledge to industry, but also does the same to both the domain expert and their students.

Idea 4: Greatly Expanded AI Training

As AI permeates the economy, Canada will face an increasing need for AI expertise. Today, that training comes mostly in the form of computer science degrees. Just as computer science split off from mathematics in the 1960s, AI is emerging today as a discipline distinct from computer science. In part, this shift is taking the form of recognizing that not every AI graduate needs to learn topics that computer science rightly considers part of its core, such as software engineering, operating systems, computer architecture, user interface design, computer graphics, and so on. Conversely, the shift sees new topics as core to the discipline. Most fundamental is machine learning. Dedicated training in AI will require a deeper focus on the mathematical foundations of probability and statistics, building to advanced topics such as deep learning, reinforcement learning, machine learning theory, and so on. Various AI modalities also deserve separate study, such as computer vision, natural language processing, multiagent systems, robotics, and reasoning. Training in ethics, optional in most computer science programs, will become essential.

Beyond dedicated training in the core discipline, we anticipate huge demand for broad-audience AI literacy training; for AI minors to complement other disciplinary specializations; for continuing education and “micro-credential” programs; and for executive education in AI. There is also a growing need for “AI Adoption Facilitators”: bridge-builders who can help established workers in medium-to-large organizations understand how data-driven tools could offer value in solving the problems they face. Training for this role would emphasize business principles and domain expertise, but would also require firmer foundations in machine learning and data science than are currently typical in those disciplines.

Read the full report via our friends at the C.D. Howe Institute.

University Of Maryland Will Investigate AI Limitations Via Army Research Grant

COLLEGE PARK, MD: University of Maryland College of Information Studies (UMD iSchool) researchers, led by principal investigator Dr. Susannah Paletz, have been awarded a three-year, US$616,700 grant from the Army Research Office (ARO), overseen by ARO Program Manager Dr. Edward Palazzolo. This project examines how teams of intelligence analysts can work together and with artificial intelligence (AI). AI has the potential to support intelligence analysts in reviewing potentially hundreds of thousands of source documents, pulling out key findings, and assembling them into actionable intelligence. AI can also aid in the flow of information and projects among members of the intelligence team, improving the efficiency and accuracy of their work.

“AI-driven technology has sometimes been touted as a replacement for human intelligence,” said Dr. Adam Porter, the project’s co-principal investigator, professor at the UMD Department of Computer Science, and Executive and Scientific Director of the Fraunhofer USA Center for Experimental Software Engineering (CESE). “In practice, however, AI doesn’t always work, or gives limited or biased answers. Human oversight is still required, and it’s therefore critical that we deeply understand how humans and AI can work best together.”

The Human-Agent Teaming on Intelligence Tasks project, coordinated through the iSchool, will focus on two research areas: (1) how interactive AI agents, such as chatbots, can mitigate or exacerbate the communication and coordination problems that can occur with shift handovers of intelligence work, such as inaccuracy blindness and overlooking potentially relevant information; and (2) how humans can deal with these blind spots, biases, or inaccuracies.

The research team plans to develop an experimental infrastructure for testing team cognition challenges in the work of intelligence analysts. It will consist of task-relevant input materials, such as mission descriptions and source documents; activity recording tools; experimental monitoring capabilities; and different AI supports for human analysts, such as chatbots offering advice on a particular task.

“We want to develop a task that can raise the problems with asynchronous team cognition in intelligence tasks, but is simple enough to be used by research participants with minimal training,” said Dr. Susannah B.F. Paletz, research professor at the UMD iSchool, and affiliate at the UMD Applied Research Laboratory for Intelligence and Security (ARLIS). 

Dr. Susannah B.F. Paletz


This task will substantially increase insight into the strengths and weaknesses of AI technology in supporting intelligence tasks, and will help shed light on how and when human analysts can safely place their trust in AI technology and how they can proactively identify problems in AI-generated input. It will also aid teams of humans, including asynchronous teams, working together in situations that involve AI-generated input.

“This basic research is an important step in the early process of learning how humans and agents can collaboratively become a single team with considerably greater capacity and productivity than human-only teams,” Palazzolo said. “Moreover, this research has broad implications for the work of many teams focused on knowledge work and information management, such as medical teams involved in shift work, collaborative software development teams, and research teams.”

In addition to Porter, the Fraunhofer USA team also includes Dr. Madeline Diep, Senior Scientist, and Jeronimo Cox, Software Developer, at Fraunhofer USA CESE. The Fraunhofer USA team will lead the effort to create configurable AI agents used in the experimental tasks, and a data collection and analysis infrastructure for capturing and understanding participant behaviors.

The UMD iSchool team includes graduate students Tammie Nelson, a fourth-year PhD student, and Melissa Carraway and Sarah Valhkamp, both incoming first-year PhD students in Information Science.

The grant proposal team includes UMD Office of Research Administration Contract Manager, Stephanie Swann; iSchool Business Manager, Jacqueline Armstrong; and former iSchool Business Manager, Lisa Geraghty.

Outside of UMD, Dr. Aimee Kane, the Harry W. Witt Faculty Fellow and an Associate Professor of Management in the Palumbo-Donahue School of Business at Duquesne University, will be a consultant and an intellectual contributor on this project.

ARO is an element of the U.S. Army Combat Capabilities Development Command’s (CCDC) Army Research Laboratory. The Human-Agent Teaming on Intelligence Tasks project (grant no. W911NF-20-1-0214) is slated to run through June 30, 2023. For the Silo, Mia Hinckle.

New App Demystifies Coding For Kids

NEW YORK, NY (PRWEB) – According to the White House, by 2018, 51 percent of STEM jobs will be in computer science-related fields. However, the number of tech employees has not increased along with the number of jobs available. Why? The answer is simple: lack of relevant education. The White House maintains that just one quarter of K-12 schools offer high-quality computer science instruction with programming and coding. In addition, in 2016, the Pew Research Center reported that only 17% of adults believed they were “digitally ready.” Technology is changing the way that we live and work, and it’s happening fast. So how do we ensure that individuals (especially girls and women) are digitally literate?

In my new interview below with C.M. Rubin (founder of CMRubinWorld), Derek Lo says he started Py because he wanted to demystify “coding”. His app does this by making coding fun. The program also avoids using any programming jargon until the learner is ready. Lo states that “gamification isn’t a hindrance to learning; it accelerates it.” He further notes that coding “instills a greater aptitude for systematic thinking and logical decision making.” Lo recently partnered with the nonprofit Girls Who Code to further reduce the gender gap and “change people’s image of who a coder is.”

Coding in language children understand

“We specifically write our content using language that even young children can understand.” — Derek Lo

Why were 600,000 high-paying tech jobs unfilled in the United States alone in 2015? Or is the better question: is technology developing faster than humans can learn to handle it?

When we look at diversity, things only get worse. In 2015, 22 percent of students taking the AP Computer Science exam were girls, while 13 percent were African-American or Latino. These statistics are not U.S.-specific; in 2015, Australia reported that only 28 percent of ICT jobs were held by women.

Coding has always been regarded as a mysterious field, something Derek Lo, co-founder of the new application “Py”, wants to change. Launched in 2016, the application offers interactive courses on everything from Python to iOS development. The “unique value proposition,” as Lo puts it, has paid off: the fun-oriented application has so far garnered over 100,000 downloads across iTunes and Google Play.

Most parents frown when kids use their phones at the dinner table, but what if the kids were learning to code over Sunday roast? “Ok, so maybe not the Sunday roast, but seriously, could a more accessible and fun coding application make all the difference?”

The Global Search for Education is excited to welcome one of Py’s founders, Derek Lo, to discuss how Py’s revolutionary approach is literally making coding cool.

Coding creates websites but also stimulates thought

“Coding can provide people with the awesome ability of being able to create tangible things like websites and apps. It also instills less tangible things like a greater aptitude for systematic thinking and logical decision making.” — Derek Lo

People say education today is often treated as a business and that individual students’ needs have not been prioritized enough. As the number of qualified applicants increases, can individualized learning tools, such as Py, help today’s generations remain competent in our globalized world, even with “broken” education systems?

Yes. As college acceptance rates decline, more people will need alternatives for learning career-essential skills, and we believe Py will be a big part of that. Using machine learning algorithms, we’re able to adapt the user experience based on prior skill and behavior within the app, creating a tailored curriculum. Having a personal tutor in your pocket that knows how you learn and what you should be learning is powerful, and that is why we are investing in personalization.
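
Lo does not describe Py’s algorithms, but the general idea of adapting a curriculum to a learner can be sketched in a few lines of Python. Everything below, from the skill-update rule to the lesson names, is an illustrative assumption, not Py’s actual implementation:

```python
# Hypothetical sketch of adaptive lesson selection: track a 0..1 skill
# estimate and always serve the lesson closest to the learner's level.
def update_skill(skill, answered_correctly, rate=0.2):
    """Nudge the skill estimate toward 1 on success, toward 0 on failure."""
    target = 1.0 if answered_correctly else 0.0
    return skill + rate * (target - skill)

def next_lesson(skill, lessons):
    """Pick the lesson whose difficulty is closest to the learner's skill."""
    return min(lessons, key=lambda lesson: abs(lesson["difficulty"] - skill))

lessons = [
    {"name": "variables", "difficulty": 0.1},
    {"name": "loops", "difficulty": 0.4},
    {"name": "functions", "difficulty": 0.7},
]
skill = 0.3
skill = update_skill(skill, answered_correctly=True)  # ~0.44 after one success
print(next_lesson(skill, lessons)["name"])            # prints "loops"
```

Even this toy version captures the pitch: correct answers pull the learner toward harder material, so the curriculum tailors itself without a fixed lesson order.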

Py App On Google Play

Py provides its users with a simple and easy platform, while many other coding applications (e.g., Solo Learn) have opted for more traditional and serious lesson plans. Does making learning applications appear more serious fuel the perception that coding is a hard and scary thing to learn? Are we over-complicating the field of coding and making it seem inaccessible, or should students really be this wary of programming?

One of the reasons that my co-founder and I started Py is to demystify “coding”. We make it easy by making it fun. When you’re dragging pretty blocks around and pressing colorful buttons, it doesn’t feel like work. Yet users are still soaking up all the same knowledge they would be by slogging through a boring textbook. We also intentionally avoid programming jargon until the learner is ready. A good example is how we teach users about loops: we use words like “repeat” instead of “iterate”. Almost all of Py’s courses focus on teaching the fundamental concepts using simple language and in an interactive fashion.
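
For readers curious what jargon-free loop teaching might look like, here is a minimal, hypothetical Python snippet in that spirit (not taken from Py’s actual courses):

```python
def repeat_message(message, times):
    """Build a list by repeating a message -- "repeat", not "iterate"."""
    lines = []
    for _ in range(times):  # a beginner can read this as "repeat `times` times"
        lines.append(message)
    return lines

print(repeat_message("Hello, coder!", 3))
```

The code behaves identically whether you call it repeating or iterating; the vocabulary is the only thing that changes for the learner.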

Also, many people are scared away from learning how to code because they hear from friends that computer science is such a difficult major in school. An important thing to realize is that there’s a big difference between theoretical computer science and making a simple website. An art major might not need to understand Dijkstra’s algorithm, but would greatly benefit from knowing a bit of HTML and CSS.

Getting Young Adults Interested In Coding

“We’re extremely excited about helping to change people’s image (and self-image) of who a coder is and actively encourage more girls to get into coding.” — Derek Lo

What would you say to skeptics who question whether a game-like application like Py can truly help people learn how to code properly?

Gamification isn’t a hindrance to learning; it accelerates it. By keeping you excited and engaged, Py teaches you better than if you got bored or zoned out. When you’re having fun, you actually learn faster and better.

Another way to phrase this question might be, “Even if Py is fun, do you walk away having learned something from it?” The answer is yes, definitely. We’re very data-driven, constantly improving our courses by analyzing our users’ progress. We can see (and track) real progress in our users’ ability to understand everything from basic semantics to high-level algorithms and design principles.

Do you think Py’s game-like surface allows younger generations to become more involved with coding?

Yes. We specifically write our content using language that even young children can understand. In fact, a parent emailed us just the other day telling us he was using Py to teach his 10-year-old son Python! Currently, our target demographic is definitely a bit older than that, though. We think of Py as the learn-to-code solution for the Snapchat generation.

What general skills does coding teach kids and young adults?

Coding can provide people with the awesome ability of being able to create tangible things like websites and apps. It also instills less tangible things like a greater aptitude for systematic thinking and logical decision making.

Understand Algorithm Before Typing It

“Once you understand how an algorithm works, typing it out should be an afterthought. The important thing is to understand it—once you do, it’s yours forever.” — Derek Lo

Py has recently partnered with Girls Who Code. Why do you think coding has been branded throughout history as a ‘male’ profession and how do you hope to eliminate this gender gap?

Historically some of the most important computer scientists are women. Ada Lovelace and Grace Hopper are considered pioneers of programming. Stereotypes aside, men and women are obviously equally capable of becoming great software engineers. We’re extremely excited about helping to change people’s image (and self-image) of who a coder is and actively encourage more girls to get into coding. We’re huge fans of Girls Who Code and we’re so excited to provide them free premium subscriptions for some of their students.

When we think of coding, we mostly envision computer screens, yet we tend to use our phones more often than we do our computers. How does Py bridge the gap between using a computer screen as opposed to learning how to code on smaller devices? Is the coding world shifting to using smartphones or is coding still a generally ‘computer’ based field?

People actually don’t need to type lots of code to learn the concepts necessary to become great programmers. We’ve built interaction types like “fill-in-the-blank” that let users quickly edit code on the fly without any typing. Recently we’ve also created a custom keyboard that allows users to type real code on their phones in a frictionless way. This is great for short programs and practicing the fundamentals, and it’s how we’re making the transition between computer and phone easier. Applying this knowledge to create a website or app does still primarily take place on computers. But the world is seeing a wave of new mobile learning applications, and I think we’re at the forefront of that trend.

How do you envision the world of coding changing in the next 15-20 years? How will Py keep up with these changes in the field?

Coding will become less about rote memorization of basic syntax and more about high-level understanding of what’s really going on. Programming languages have already morphed from low-level (shifting bits and allocating memory) to high-level (abstract data structures and functional programming), and from obtuse (assembly, machine code) to human-friendly (Python, Swift).

That’s why Py focuses on high-level concepts. Once you understand how an algorithm works, typing it out should be an afterthought. The important thing is to understand it—once you do, it’s yours forever.

CM Rubin and Derek Lo
(l) C. M. Rubin & (r) Derek Lo

 

(All photos are courtesy of CMRubinWorld except featured image by J. Barker)

For the Silo, David Wine /CMRubinWorld with contributions by Zita Petrahai.