Say that you're a venture capitalist and some researcher-entrepreneur comes up to you, pitching an idea for a new battery that costs the same as current lithium-ion batteries but holds 50% more charge. You'd evaluate that technical proposal on technical grounds -- reviewing your own knowledge of the physics of batteries as well as consulting others who are scientifically well-versed in the matter to determine whether the proposal is possible. Near-term feasibility would also enter into the equation -- if the battery would take a very long time to develop, then you might prefer to invest in something else with a near-term payoff.
Now consider the proposal that many transhumanists are putting forth -- that we should create Friendly, roughly human-equivalent AI with the capacity for recursive self-improvement. Obviously, this is a more ambitious project than creating a better battery. Still, like the battery, it deserves an evaluation on technical grounds to the extent that this is possible. Unfortunately for advocates of Friendly AI, this proposal is also subject to lines of questioning that the battery proposal is not -- evaluations based on philosophical baggage. Batteries carry little philosophical baggage, owing to their near-total political irrelevance and general absence of moral valence.
The philosophical baggage around the possibility of self-improving Friendly AI is abundant. Even the very notion of human-equivalent AI offends human sensibilities, not to mention the idea that morality is something that can just be... programmed. To scientific materialists who believe in functionalism, many of these questions are non-issues, but Friendly AI is still subjected to scrutiny above and beyond what a straightforward technical proposal receives, much of it justified: ethical questions like "would it be right to create something smarter than us?", feasibility questions like "can a being of intelligence N create a being of intelligence N+1?", and so on. These numerous questions make arguing that Friendly AI is feasible and a good idea very challenging and multi-faceted. A philosopher might do well to take up the issue as an interesting challenge even without any personal attachment to it.
Exploring the philosophical obstacles to acceptance of Friendly AI is a topic worth 1,000 posts at least, so I won't go into detail here, merely call attention to their existence. I will instead take up the second challenge from the battery example as applied to Friendly AI -- the near-term feasibility issue. As Ray Kurzweil argues in The Singularity is Near, if brain-scanning resolution and computing power continue to improve at the rates they have sustained for decades, then we will have the technology to upload detailed functional algorithms of the human brain by around 2030. Arguing against this upper limit requires claiming that 1) brain scans and computers will halt their exponential improvement at some point in the next 20 years, 2) extremely subtle and complex low-level features must be duplicated to duplicate human intelligence, or 3) human intelligence cannot be simulated in a computer, even in principle.
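To make the shape of Kurzweil's extrapolation concrete, here is a minimal back-of-envelope sketch. The specific numbers -- his oft-cited ~10^16 calculations per second for functional (not molecular-level) brain simulation, a price-performance baseline of ~10^10 calculations per second per $1,000 around 2010, and a roughly one-year doubling time -- are illustrative ballpark assumptions in the spirit of his charts, not precise measurements:

```python
# Back-of-envelope sketch of the kind of extrapolation Kurzweil makes.
# All constants below are illustrative assumptions, not measured data.
import math

BRAIN_CPS = 1e16        # assumed compute for functional brain simulation
BASE_YEAR = 2010        # assumed baseline year
BASE_CPS_PER_1K = 1e10  # assumed calculations/sec per $1,000 at baseline
DOUBLING_YEARS = 1.0    # assumed price-performance doubling time

# How many doublings until $1,000 of hardware reaches brain-scale compute?
doublings_needed = math.log2(BRAIN_CPS / BASE_CPS_PER_1K)  # log2(1e6) ~ 19.9
crossover_year = BASE_YEAR + doublings_needed * DOUBLING_YEARS
print(f"~{doublings_needed:.1f} doublings -> crossover around {crossover_year:.0f}")
# prints: ~19.9 doublings -> crossover around 2030
```

Under those assumptions, the hardware side of the argument lands right around 2030; shift the doubling time or the brain-compute estimate by a factor of a few and the date moves by a decade or two, but the exponential logic stays the same.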
I feel that arguments #2 and #3 can be dismissed rather readily. I also think that much of the motivation for standing behind #2 comes from a milder form of #3 -- namely, that even if human intelligence can in fact be simulated in a computer, we should expect the task to take hundreds of years. That is the position of Doug Hofstadter. Those who take this line of argument would be very disappointed if humans were "so simple" that simulating our minds in computers by 2030 were possible. Rebutting this argument comprehensively would take numerous posts on its own -- I hope you realize why I'm bringing up issues without exploring each in detail. (I would be writing this post for hours and have no time for anything else.) All I can say here is that the complexity and awe of the human mind would be in no way diminished if it turned out that analyzing features "merely" at the neuron level -- of which there are 100 billion -- were a sufficient level of description. Arguments from the way evolution works, and from cognitive science experiments, make it implausible that exabytes of specific sub-neuron details of brain biochemistry play an absolutely necessary and indispensable role in minds-in-general.
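For a sense of scale, here is a rough storage calculation. Every figure is an order-of-magnitude assumption -- about 10^11 neurons, an assumed average of ~10^4 synapses per neuron, and a few bytes of state per synapse -- used only to illustrate the gap between a neuron-level description and the exabytes of sub-neuron biochemical detail mentioned above:

```python
# Rough storage arithmetic: neuron-level vs. sub-neuron description.
# All figures are order-of-magnitude assumptions for illustration only.
NEURONS = 1e11             # ~100 billion neurons
SYNAPSES_PER_NEURON = 1e4  # assumed average connectivity
BYTES_PER_SYNAPSE = 4      # assumed: one weight/state value per synapse
EXABYTE = 1e18             # bytes in one exabyte

neuron_level_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"neuron-level description: ~{neuron_level_bytes:.0e} bytes "
      f"(~{neuron_level_bytes / 1e15:.0f} petabytes)")
print(f"~{EXABYTE / neuron_level_bytes:.0f}x smaller than a single exabyte")
# prints: neuron-level description: ~4e+15 bytes (~4 petabytes)
#         ~250x smaller than a single exabyte
```

The point is not the exact numbers but the ratio: a neuron-and-synapse-level description is hundreds of times smaller than even one exabyte, so the claim that exabytes of biochemical detail are strictly necessary is a strong claim, not a default.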
I think that a lot of the ideological fuel behind #2 and #3 comes from the misconception that AI would need to be a straightforward copy of the human brain to work at all. This argument is on par with the idea that someone would need to perfectly copy a bird to make a flying machine, or perfectly copy a mole or earthworm to make a digging machine. The reason people consider this argument acceptable when applied to intelligence but not to digging is the mystical association of intelligence with a specific Gift From God, a unique cosmic quality bestowed only on Homo sapiens sapiens due to our unique specialness and spiritual importance. For some who have been raised to believe this since they could understand language, changing their minds might be outright impossible. They might also act like the targets in a game of whack-a-mole, avoiding specific claims about their deeply held view of human exceptionalism and instead advancing arguments that sound superficially more plausible, but are in fact only put forward because of deeper reservations about the entire enterprise and its philosophical implications.
Still, say that I granted that intelligence was too complex to instantiate in computers for hundreds of years. Even then, I would argue that the near-term feasibility requirements usually considered so important for research proposals and venture capital-funded projects would be inapplicable in this case. Given the earth-shaking implications of creating another intelligent species, especially one with the quality of "instant intelligence, just add computing power", pursuing a benevolent course for the technology would be justified even if we didn't think it would arrive for hundreds of years. Failure would mean the extinction of our species and its replacement by AI (a highly undesirable outcome in the eyes of many), while success would mean a tremendous injection of intelligence and wisdom into our society, wisdom that could be turned to humanitarian ends. Perhaps others are visualizing a smaller impact from "instant intelligence, just add computing power" than I am, in which case ignoring it if it were centuries out might make sense, but I think the "we might as well ignore this" attitude more often comes from evaluating Friendly AI as if it were a business idea with a 3-5 year profitability horizon. Viewing such a momentous and transformational possibility in that light is obviously inappropriate.