The article doesn't reflect kindly on the visions the AI company has articulated, so what incentive would they have to release it if they weren't serious about alignment research?
Because publishing evidence that their models might be dangerous (potentially cherry-picked evidence; this is privately funded research, after all) conveniently implies that those models are very powerful, without the company actually having to prove the latter.