The report is the company's first public accounting of how it enforces its community standards.
The report highlights six key areas: fake accounts, spam, adult nudity and sexual activity, graphic violence, terrorist propaganda and hate speech.
Facebook removed 2.5 million pieces of hate speech in the three months to March, an increase of more than 50 percent on the previous quarter.
While the removal of 583 million fake Facebook accounts is perhaps the biggest takeaway from the report, the company emphasised how its flagging and removal metrics had improved compared with previous quarters - for example, through better photo-detection technology that can match both old and newly posted content. It estimates that about 3-4 percent of active Facebook accounts in Q1 were still fake.
Facebook took down or applied warning labels to 3.4 million pieces of violent content in the three months to March - a 183 percent increase from the final quarter of 2017. Nearly 100 percent of the spam and 96 percent of the adult nudity was flagged for takedown, with the help of technology, before any Facebook users complained about it.
The numbers were disclosed in a report Tuesday that breaks down how much material Facebook removes for violating its terms of service.
Facebook product manager Sara Su said a lot of hateful content in Myanmar still goes unreported or is misreported.
At its heart, artificial intelligence technology is about systems that learn by example.
Facebook said the growth was possibly the result of a higher volume of graphic violent content being shared on the platform in the first three months of this year, with users more aggressively posting images of violence from places such as war-torn Syria. The 583 million removed fake accounts are in addition to the millions of fake account attempts Facebook says it blocks every day before they can ever register.
Among the metrics the report tracks is how much content Facebook detected proactively using its technology - before people who use Facebook reported it.
"Today, as we sit here, 99 percent of the ISIS and al-Qaida content that we take down on Facebook, our AI systems flag before any human sees it", Zuckerberg said at the hearing.
For hate speech, Facebook's detection technology flagged just 38 percent of the violations before users reported them.
The company has been using artificial intelligence to help pinpoint the bad content, but Rosen said the technology still struggles to grasp context - to tell the difference between a Facebook post pushing hate and one simply recounting a personal experience.
Facebook acknowledged it has work to do when it comes to properly removing hate speech.
The committee has also urged Facebook boss Mark Zuckerberg to appear before it, adding that it would be open to taking evidence from the billionaire founder via video link if he will not attend in person.