Inpainting techniques are widely used to reconstruct damaged regions in images, producing visually plausible results. A single image can be reconstructed in countless ways, and the outcome varies with the method and parameters chosen, which makes quality evaluation a challenging task. Different assessment metrics can be applied to verify the quality of the results; however, they require a reference image, which is rarely available. This absence is why most researchers evaluate results through subjective opinions, which undermines reliability. This study presents IQAENet: a Deep Learning approach capable of providing objective quality assessment of inpainted images without a reference image. The network was trained on an auto-generated dataset consisting of pairs of reconstructed and damaged images, together with their original Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) measurements. Results show a coefficient of determination of 0.85 for PSNR and 0.88 for SSIM, which suggests the proposed model can predict the quality of a reconstructed image with high fidelity.
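For context, the two full-reference metrics the network learns to predict can be computed directly when the original image is available. The sketch below, assuming NumPy-style float images in [0, 1], computes PSNR from the mean squared error and a simplified, global (non-windowed) variant of SSIM; published implementations typically apply SSIM over local Gaussian windows instead. All function names here are illustrative, not from the paper.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """Simplified SSIM computed over the whole image (no local windowing)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = np.mean((ref - mu_x) * (img - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Toy example: an "original" image and a slightly noisy "reconstruction".
rng = np.random.default_rng(0)
original = rng.random((64, 64))
reconstructed = np.clip(original + rng.normal(0.0, 0.05, original.shape), 0.0, 1.0)

print(f"PSNR: {psnr(original, reconstructed):.2f} dB")
print(f"SSIM: {global_ssim(original, reconstructed):.3f}")
```

Because both quantities need `ref`, they cannot be evaluated on real-world inpainting outputs where the undamaged image does not exist; the proposed network sidesteps this by regressing these scores from the damaged/reconstructed pair alone.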