The social media site will be rolling out a special AI, which will be trained to detect if someone could be feeling suicidal and get them the help they need before it is too late.
The AI was trialled in the United States in March. It works by scanning the content of Facebook posts and comments for phrases that could indicate someone is considering suicide, including responses such as 'Are you ok?' and 'Can I help?'
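As a toy illustration of the phrase-scanning idea described above (the production system is a trained machine-learning model, not a keyword list; the phrases and logic here are assumptions for illustration only):

```python
# Hypothetical sketch: flag a post for human review when its comments
# contain phrases associated with concern from friends. The real Facebook
# system uses a trained classifier, not simple substring matching.
CONCERN_PHRASES = ["are you ok", "can i help"]

def flag_for_review(post_text: str, comments: list) -> bool:
    """Return True if the post or its comments contain a concern phrase."""
    text = " ".join([post_text] + comments).lower()
    return any(phrase in text for phrase in CONCERN_PHRASES)

print(flag_for_review("feeling really low today", ["Are you OK?"]))   # True
print(flag_for_review("great day at the beach", ["looks fun!"]))      # False
```

A flagged post would then be passed to human reviewers rather than acted on automatically.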
Guy Rosen, Facebook's vice president for product management, said: "Speed really matters. We have to get help to people in real time."
If the AI detects something of concern, it hands the information to a specialist team at Facebook, which offers the person resources such as a phone line where they can talk. In some cases, the team will contact the local police force if it feels that is necessary.
Rosen did not reveal which countries it would be rolling out to first but he said it will eventually be used worldwide except in the European Union because of "sensitivities".
It comes after Facebook partnered with an Australian Government agency to combat the rise in revenge porn.
e-Safety Commissioner Julie Inman Grant has revealed how victims of "image-based abuse" could take action to stop the pictures from being sent on Facebook, Instagram or Facebook Messenger.
She said: "We see many scenarios where maybe photos or videos were taken consensually at one point, but there was not any sort of consent to send the images or videos more broadly ...
"It would be like sending yourself your image in email, but obviously this is a much safer, secure end-to-end way of sending the image without sending it through the ether. They're not storing the image, they're storing the link and using artificial intelligence and other photo-matching technologies. So if somebody tried to upload that same image, which would have the same digital footprint or hash value, it will be prevented from being uploaded."
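The store-the-hash-not-the-image workflow Inman Grant describes can be sketched as follows. Note this is a minimal illustration using an exact cryptographic hash (SHA-256): Facebook's actual photo-matching technology is more sophisticated and can tolerate resizing and re-encoding, whereas an exact hash only catches byte-identical copies. The class and method names here are hypothetical.

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Return a fingerprint (the "hash value") for the image bytes."""
    return hashlib.sha256(data).hexdigest()

class UploadFilter:
    """Toy re-upload blocker: stores only hashes, never the images."""

    def __init__(self):
        self.blocked_hashes = set()

    def report(self, data: bytes) -> None:
        """A victim reports an image; keep its hash, discard the image."""
        self.blocked_hashes.add(image_hash(data))

    def allow_upload(self, data: bytes) -> bool:
        """Reject any upload whose hash matches a reported image."""
        return image_hash(data) not in self.blocked_hashes

f = UploadFilter()
reported = b"original-image-bytes"
f.report(reported)
print(f.allow_upload(reported))       # False: the same bytes are blocked
print(f.allow_upload(b"other image")) # True: an unrelated image passes
```

The key design point, as the quote notes, is that the platform never retains the reported image itself, only its fingerprint, which cannot be reversed back into the picture.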