While some Silicon Valley giants grapple with the ethics of offering cutting-edge artificial intelligence technology to military and law enforcement agencies with histories of abuse, Amazon, apparently, has no reservations.

When asked about the culture “gap” between Amazon employees — who have protested the sale of facial recognition technology to law enforcement — and the company’s “executive level” interests, Teresa Carlson, vice president of the worldwide public sector of Amazon Web Services, was frank. “We are committed to our customer, and we are unwaveringly committed to the U.S. government and the governments we work with around the world,” Carlson declared at the Aspen Security Forum on July 20 in Colorado.

Carlson’s remarks, largely unreported outside of a mention in a technology trade journal, mark the lengthiest public discussion in recent months of Amazon’s role as a technology provider for military and law enforcement agencies, a role that has been a source of substantial controversy for the company.

Amazon workers recently circulated a letter to chief executive Jeff Bezos protesting the sale of Amazon’s facial recognition software, called Rekognition, to law enforcement; the letter cited the Department of Homeland Security’s “increasingly inhumane treatment of refugees and immigrants.” The American Civil Liberties Union raised concerns about the sale of Amazon’s technology, charging that Rekognition could allow police to constantly monitor and harass ethnic minorities and political dissidents. Microsoft and Salesforce have similarly faced internal pressure from employees to end work on behalf of DHS, with workers outraged that their companies enable U.S. immigration policies.

“Employees need a voice,” said Carlson, regarding the recent criticism. “I can’t speak for any other company, but we want to work with our government,” she added. “We feel compelled. … We believe government should have the same capability — our war fighters out there in the field, our civil servants — should have those same capabilities.” Carlson acknowledged that “you’re always gonna have bad actors,” but went on to laud the positive applications of the software.

When asked by New York Times reporter Cecilia Kang if Amazon has “drawn any red lines, any standards, guidelines, on what you will and you will not do in terms of defense work,” Carlson demurred.

“We have not drawn any lines there,” Carlson responded. “We are unwaveringly in support of our law enforcement, defense, and intelligence community.” She went on to admit that Amazon often doesn’t “know everything they’re actually utilizing the tool for,” but insisted that the U.S. government should have the most “innovative and cutting-edge tools” available so that it isn’t bested by its “adversaries.” Carlson raised “ethical-use rights” as a mechanism by which Amazon could reclaim its technology if it were used illegally, though it’s not clear how that would apply to legal activities, like those undertaken by U.S. Immigration and Customs Enforcement, to which Amazon’s own employees object.

Amazon, known for its cloud computing services and e-commerce business, is quickly moving into the defense and homeland security space, offering an array of machine learning capabilities to the Pentagon and police agencies around the country. The company currently manages a major cloud computing contract for U.S. intelligence agencies and is bidding on a $10 billion contract to provide similar cloud services to the Defense Department.

Leaked emails obtained by The Intercept revealed that Amazon provided “some work loads” on the controversial Project Maven initiative launched by the Defense Department last year. The project is the Pentagon’s first major effort to integrate Silicon Valley-developed machine learning technology into the military’s capabilities. The initiative applies machine learning to help analysts interpret imagery captured by drones on the battlefield by automatically cataloguing people, buildings, and events.
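Project Maven’s own models and Amazon’s “work loads” on it are not public. As a purely hypothetical illustration of what automatic cataloguing looks like with off-the-shelf commercial tooling, and not a description of Maven’s pipeline, the sketch below calls Amazon’s own Rekognition label-detection API on a single image; the file name and parameters are illustrative only.

```python
# Hypothetical sketch: automatic image cataloguing with a commercial vision
# API (Amazon Rekognition label detection). This is NOT Project Maven's
# pipeline; it only illustrates what tagging "people, buildings, and events"
# looks like with off-the-shelf tooling. Assumes AWS credentials are set up.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("aerial_frame.jpg", "rb") as f:  # illustrative file name
    frame = f.read()

response = rekognition.detect_labels(
    Image={"Bytes": frame},
    MaxLabels=10,      # return at most 10 labels
    MinConfidence=70,  # drop labels the model is less than 70% sure about
)

# Each label comes back with a name and a confidence score.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```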

The targeting efforts will be used for Predator and Reaper drones to strike targets “in the Middle East,” according to Defense One. Critics charge that the use of advanced AI technology in drone warfare will only encourage the practice of “pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage.” President Donald Trump has reportedly increased the pace of drone strikes to four to five times the rate of President Barack Obama, who had increased the drone strike rate to 10 times that of President George W. Bush.

Google won a Project Maven contract but attempted to conceal its involvement, which Gizmodo and The Intercept eventually reported in March. In response, several Google employees quit in protest, and thousands signed a letter demanding that the company end its ties with the military. Many expressed shock that Google appeared to have violated its old mantra, “Don’t be evil,” by working with the military, and that it had misled its own employees by claiming that its Pentagon work would not be used for lethal purposes, a goal top Air Force brass had explicitly stated.

Google has since said that it will not renew its work on Project Maven when the contract expires next year, and it has worked to develop a set of ethical principles for the use of AI technology. The company, however, has not sworn off future defense-related contracts.

Last week, Amazon faced another wave of criticism after a study by the ACLU found that its facial recognition software falsely matched 28 members of Congress with mugshots of people who had been arrested, and the false matches disproportionately involved members of color.

An unnamed Amazon spokesperson told The Verge, USA Today, and other publications that the ACLU’s results stemmed from poor calibration, saying the technology could not be used with a “reasonable level of certainty” at the accuracy threshold the ACLU set for its experiment.
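The dispute turns on where that threshold sits: Rekognition only returns candidate matches whose similarity score clears a caller-supplied cutoff, and the service defaults to a lower value than Amazon says is appropriate for law enforcement. The sketch below is a minimal, hypothetical illustration of how that parameter works using the AWS SDK for Python; it is not the ACLU’s actual test harness, and the collection name and image file are invented for the example.

```python
# Minimal sketch of how a similarity threshold gates Rekognition face-search
# results. Assumes AWS credentials are configured and that a face collection
# named "arrest-photos" (hypothetical) has already been indexed.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def search_matches(image_path, threshold):
    """Return (id, similarity) pairs for faces in the collection that match
    the face in image_path at or above the given similarity threshold."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = rekognition.search_faces_by_image(
        CollectionId="arrest-photos",     # hypothetical collection name
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,     # 80 is the service default
        MaxFaces=5,
    )
    return [
        (m["Face"].get("ExternalImageId", m["Face"]["FaceId"]), m["Similarity"])
        for m in response["FaceMatches"]
    ]

# At a lower threshold, weaker candidate matches are returned; at a stricter
# one, most of them are filtered out before the caller ever sees them.
print(search_matches("portrait.jpg", threshold=80))
print(search_matches("portrait.jpg", threshold=99))
```

Whether real-world users actually raise that default, and what they do with the matches that come back, is exactly the kind of downstream use Carlson acknowledged Amazon does not always know about.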