Natural languages have existed for quite a while. Due to this simple and obvious fact, they have also been subject to linguistic evolution. Linguistic evolution is quite different from the evolution of multicellular organisms. I am not going to talk about 'strong languages' supplanting 'weak languages' - that is not what I have in mind at all. I am thinking of how the parts of a language themselves are selected for.
Languages consist of many bits and pieces. These bits and pieces do not really have an existence of their own. They exist as pathways in a neural network, viz. the brain. When an infant learns to speak, it observes the linguistic productions of other humans and tries to identify the patterns in those productions. This we call learning, and learning is adjusting the weights of the pathways in the brain.
What relevance does this have to the evolution of a language? Well, clearly, patterns will be smoothed out in some sense. Our brains are fairly similar, but they are not identical copies. One brain might not spot the same pattern as another, and thus fail to generalize it. Over time, then, only patterns that most brains catch will survive. So the fact that some feature is in the grammar of a natural language is not only a testament to that feature being learnable, but to it being some kind of local optimum for learnability - other, similar patterns that are less learnable will turn into that pattern.
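To make that selection pressure a bit more concrete, here is a toy iterated-learning sketch in Python. Everything in it is a made-up assumption for illustration only - the variant names, the learnability scores, and the idea that a learner who fails to spot a pattern simply falls back on the easier one - so treat it as a cartoon of the argument, not a model of real acquisition.

```python
import random

# Toy iterated-learning sketch (illustrative assumptions only):
# a "pattern" is just a variant with a learnability score in [0, 1].
# Each generation, every learner hears one utterance from a random member
# of the previous generation and acquires that variant only if they manage
# to spot the pattern; otherwise they fall back on the easier variant.

VARIANTS = {"irregular_rule": 0.4, "regular_rule": 0.9}  # hypothetical scores
FALLBACK = "regular_rule"

def next_generation(speakers, size=200):
    learners = []
    for _ in range(size):
        heard = random.choice(speakers)
        if random.random() < VARIANTS[heard]:
            learners.append(heard)       # pattern successfully generalized
        else:
            learners.append(FALLBACK)    # learner regularizes instead
    return learners

speakers = ["irregular_rule"] * 200      # start with only the harder variant
for gen in range(10):
    speakers = next_generation(speakers)
    share = speakers.count("irregular_rule") / len(speakers)
    print(f"generation {gen + 1}: irregular share = {share:.2f}")
```

Run it and the harder-to-spot variant shrinks generation by generation until the population sits at the more learnable pattern - the "local optimum for learnability" in miniature.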
Of course, diligent practice - in a formalized setting - can make a less learnable pattern dominate, but that requires quite some effort. In part, this is why some of the 'prescriptive' grammar ideas we hear have to be taught that way: they enforce some rather unusual patterns and meanings that would not survive on their own.
Further, when a natural language lacks a way of expressing something that real life makes it necessary (or at least favourable) to express, a way of expressing it will soon appear; gaps are filled quickly, and the good contenders for filling a gap survive and filter through the speech community. Thus, gaps that matter to human interaction will not exist for long, and patterns that generalize well will fill them faster than those that do not.
When you speak your conlang to your child, you'll end up having to invent a whole lot more of this on the fly than when you speak your native language. You won't have time to jot down what you thought up, and chances are you won't remember what approach you took, so you're quite likely to end up with an inconsistent hodgepodge - one that makes it hard for the child to rely on the linguistic stimulus it is exposed to.
When you've designed your language consciously, it has not gone through this smoothing process either - thus it may have features that a first-language learner is unlikely to parse the way you prescribe, or features that are just cognitively unlikely to work out: phonological distinctions used in ways that are (too) hard to resolve, morphological and syntactic devices that cannot really be figured out without formal teaching, and so on. We simply don't know how complicated a pattern an average child can be expected to figure out on its own.
In real languages, much of the redundancy we see appearing (so-and-so, it went ..., so-and-so, he went ..., etc.) is an attempt at improving the likelihood that the hearer gets enough of the data right. If no one ever misheard things or indicated that they didn't catch a word, it would be rather unlikely that we'd waste time and effort adding extra syllables here and there. But we do add them, and it seems this helps reduce the amount of mishearing and so on.

If you have a formalized grammar that you've struggled to internalize, though, you'll be quite likely to think that the formalized version is not to be adjusted by such tricks - you'll stick to whatever level of redundancy is in the language you've made. What if that level of redundancy is insufficient? Are you even likely to realize this from your child's reactions? Notice that the way real languages tune this is as a large distributed algorithm, in which - in some cases dozens, in some cases hundreds, in some cases millions of - people are involved in adjusting the parameters of redundancy and in testing each other's hearing. Some languages do end up with too little redundancy, some with quite enough.
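For a rough sense of how much those extra syllables can buy, here is a toy sketch under some made-up assumptions: a fixed per-word chance of mishearing, and independence between repeated mentions of the same content word. Neither assumption is claimed to match real speech; the point is only the shape of the effect.

```python
import random

# Toy noisy-channel sketch (illustrative assumptions only): each mention of a
# content word is independently misheard with some probability, and the hearer
# recovers the word if at least one of its redundant mentions gets through.

MISHEAR_PROB = 0.15  # hypothetical per-mention chance of mishearing

def heard(copies):
    """True if at least one of `copies` redundant mentions gets through."""
    return any(random.random() > MISHEAR_PROB for _ in range(copies))

def recovery_rate(copies, trials=100_000):
    return sum(heard(copies) for _ in range(trials)) / trials

for copies in (1, 2, 3):
    print(f"{copies} mention(s): content recovered ~{recovery_rate(copies):.3f} of the time")

# With these made-up numbers, one mention gets through ~85% of the time,
# two mentions ~98%, three ~99.7% - extra syllables buy reliability.
```

The catch in the essay's argument is that a speech community effectively runs thousands of such trials and adjusts the redundancy accordingly, while a lone conlanger speaking to one child gets almost no feedback on whether the chosen level is anywhere near right.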
But with your child, it's quite likely you won't have the support of a huge community of other people randomly fine-tuning it like that, and it's hard to say whether your interactions - especially early on - will give you a sufficient idea of whether the language has enough redundancy (or, conversely, way too much).
This is of course not all there is to it; there are more arguments along similar lines that could be presented.